Nov 25 04:00:19 np0005534696 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 04:00:19 np0005534696 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 04:00:19 np0005534696 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 04:00:19 np0005534696 kernel: BIOS-provided physical RAM map:
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 04:00:19 np0005534696 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Nov 25 04:00:19 np0005534696 kernel: NX (Execute Disable) protection: active
Nov 25 04:00:19 np0005534696 kernel: APIC: Static calls initialized
Nov 25 04:00:19 np0005534696 kernel: SMBIOS 2.8 present.
Nov 25 04:00:19 np0005534696 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Nov 25 04:00:19 np0005534696 kernel: Hypervisor detected: KVM
Nov 25 04:00:19 np0005534696 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 04:00:19 np0005534696 kernel: kvm-clock: using sched offset of 2849165767 cycles
Nov 25 04:00:19 np0005534696 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 04:00:19 np0005534696 kernel: tsc: Detected 2445.406 MHz processor
Nov 25 04:00:19 np0005534696 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Nov 25 04:00:19 np0005534696 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 04:00:19 np0005534696 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 04:00:19 np0005534696 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 25 04:00:19 np0005534696 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Nov 25 04:00:19 np0005534696 kernel: Using GB pages for direct mapping
Nov 25 04:00:19 np0005534696 kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 04:00:19 np0005534696 kernel: ACPI: Early table checksum verification disabled
Nov 25 04:00:19 np0005534696 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Nov 25 04:00:19 np0005534696 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 04:00:19 np0005534696 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 04:00:19 np0005534696 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 04:00:19 np0005534696 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Nov 25 04:00:19 np0005534696 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 04:00:19 np0005534696 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 04:00:19 np0005534696 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 04:00:19 np0005534696 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Nov 25 04:00:19 np0005534696 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Nov 25 04:00:19 np0005534696 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Nov 25 04:00:19 np0005534696 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Nov 25 04:00:19 np0005534696 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Nov 25 04:00:19 np0005534696 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Nov 25 04:00:19 np0005534696 kernel: No NUMA configuration found
Nov 25 04:00:19 np0005534696 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Nov 25 04:00:19 np0005534696 kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Nov 25 04:00:19 np0005534696 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Nov 25 04:00:19 np0005534696 kernel: Zone ranges:
Nov 25 04:00:19 np0005534696 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 04:00:19 np0005534696 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 04:00:19 np0005534696 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Nov 25 04:00:19 np0005534696 kernel:  Device   empty
Nov 25 04:00:19 np0005534696 kernel: Movable zone start for each node
Nov 25 04:00:19 np0005534696 kernel: Early memory node ranges
Nov 25 04:00:19 np0005534696 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 04:00:19 np0005534696 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 25 04:00:19 np0005534696 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Nov 25 04:00:19 np0005534696 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Nov 25 04:00:19 np0005534696 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 04:00:19 np0005534696 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 04:00:19 np0005534696 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 04:00:19 np0005534696 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 04:00:19 np0005534696 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 04:00:19 np0005534696 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 04:00:19 np0005534696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 04:00:19 np0005534696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 04:00:19 np0005534696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 04:00:19 np0005534696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 04:00:19 np0005534696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 04:00:19 np0005534696 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 04:00:19 np0005534696 kernel: TSC deadline timer available
Nov 25 04:00:19 np0005534696 kernel: CPU topo: Max. logical packages:   4
Nov 25 04:00:19 np0005534696 kernel: CPU topo: Max. logical dies:       4
Nov 25 04:00:19 np0005534696 kernel: CPU topo: Max. dies per package:   1
Nov 25 04:00:19 np0005534696 kernel: CPU topo: Max. threads per core:   1
Nov 25 04:00:19 np0005534696 kernel: CPU topo: Num. cores per package:     1
Nov 25 04:00:19 np0005534696 kernel: CPU topo: Num. threads per package:   1
Nov 25 04:00:19 np0005534696 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 25 04:00:19 np0005534696 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 04:00:19 np0005534696 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 25 04:00:19 np0005534696 kernel: kvm-guest: setup PV sched yield
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 04:00:19 np0005534696 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 04:00:19 np0005534696 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 25 04:00:19 np0005534696 kernel: Booting paravirtualized kernel on KVM
Nov 25 04:00:19 np0005534696 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 04:00:19 np0005534696 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 25 04:00:19 np0005534696 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Nov 25 04:00:19 np0005534696 kernel: kvm-guest: PV spinlocks enabled
Nov 25 04:00:19 np0005534696 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 04:00:19 np0005534696 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 04:00:19 np0005534696 kernel: random: crng init done
Nov 25 04:00:19 np0005534696 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: Fallback order for Node 0: 0 
Nov 25 04:00:19 np0005534696 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 04:00:19 np0005534696 kernel: Policy zone: Normal
Nov 25 04:00:19 np0005534696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 04:00:19 np0005534696 kernel: software IO TLB: area num 4.
Nov 25 04:00:19 np0005534696 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 25 04:00:19 np0005534696 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 04:00:19 np0005534696 kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 04:00:19 np0005534696 kernel: Dynamic Preempt: voluntary
Nov 25 04:00:19 np0005534696 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 04:00:19 np0005534696 kernel: rcu: 	RCU event tracing is enabled.
Nov 25 04:00:19 np0005534696 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Nov 25 04:00:19 np0005534696 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 25 04:00:19 np0005534696 kernel: 	Rude variant of Tasks RCU enabled.
Nov 25 04:00:19 np0005534696 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 25 04:00:19 np0005534696 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 04:00:19 np0005534696 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 25 04:00:19 np0005534696 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 04:00:19 np0005534696 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 04:00:19 np0005534696 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 25 04:00:19 np0005534696 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Nov 25 04:00:19 np0005534696 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 04:00:19 np0005534696 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 04:00:19 np0005534696 kernel: Console: colour VGA+ 80x25
Nov 25 04:00:19 np0005534696 kernel: printk: console [ttyS0] enabled
Nov 25 04:00:19 np0005534696 kernel: ACPI: Core revision 20230331
Nov 25 04:00:19 np0005534696 kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 04:00:19 np0005534696 kernel: x2apic enabled
Nov 25 04:00:19 np0005534696 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 04:00:19 np0005534696 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 25 04:00:19 np0005534696 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 25 04:00:19 np0005534696 kernel: kvm-guest: setup PV IPIs
Nov 25 04:00:19 np0005534696 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 04:00:19 np0005534696 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Nov 25 04:00:19 np0005534696 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 04:00:19 np0005534696 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 04:00:19 np0005534696 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 04:00:19 np0005534696 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 04:00:19 np0005534696 kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 04:00:19 np0005534696 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 04:00:19 np0005534696 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 25 04:00:19 np0005534696 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 04:00:19 np0005534696 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 04:00:19 np0005534696 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 04:00:19 np0005534696 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 04:00:19 np0005534696 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 04:00:19 np0005534696 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Nov 25 04:00:19 np0005534696 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 04:00:19 np0005534696 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 04:00:19 np0005534696 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 04:00:19 np0005534696 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 25 04:00:19 np0005534696 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 04:00:19 np0005534696 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Nov 25 04:00:19 np0005534696 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 25 04:00:19 np0005534696 kernel: Freeing SMP alternatives memory: 40K
Nov 25 04:00:19 np0005534696 kernel: pid_max: default: 32768 minimum: 301
Nov 25 04:00:19 np0005534696 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 04:00:19 np0005534696 kernel: landlock: Up and running.
Nov 25 04:00:19 np0005534696 kernel: Yama: becoming mindful.
Nov 25 04:00:19 np0005534696 kernel: SELinux:  Initializing.
Nov 25 04:00:19 np0005534696 kernel: LSM support for eBPF active
Nov 25 04:00:19 np0005534696 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 25 04:00:19 np0005534696 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 04:00:19 np0005534696 kernel: ... version:                0
Nov 25 04:00:19 np0005534696 kernel: ... bit width:              48
Nov 25 04:00:19 np0005534696 kernel: ... generic registers:      6
Nov 25 04:00:19 np0005534696 kernel: ... value mask:             0000ffffffffffff
Nov 25 04:00:19 np0005534696 kernel: ... max period:             00007fffffffffff
Nov 25 04:00:19 np0005534696 kernel: ... fixed-purpose events:   0
Nov 25 04:00:19 np0005534696 kernel: ... event mask:             000000000000003f
Nov 25 04:00:19 np0005534696 kernel: signal: max sigframe size: 3376
Nov 25 04:00:19 np0005534696 kernel: rcu: Hierarchical SRCU implementation.
Nov 25 04:00:19 np0005534696 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 25 04:00:19 np0005534696 kernel: smp: Bringing up secondary CPUs ...
Nov 25 04:00:19 np0005534696 kernel: smpboot: x86: Booting SMP configuration:
Nov 25 04:00:19 np0005534696 kernel: .... node  #0, CPUs:      #1 #2 #3
Nov 25 04:00:19 np0005534696 kernel: smp: Brought up 1 node, 4 CPUs
Nov 25 04:00:19 np0005534696 kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Nov 25 04:00:19 np0005534696 kernel: node 0 deferred pages initialised in 10ms
Nov 25 04:00:19 np0005534696 kernel: Memory: 7778908K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 604512K reserved, 0K cma-reserved)
Nov 25 04:00:19 np0005534696 kernel: devtmpfs: initialized
Nov 25 04:00:19 np0005534696 kernel: x86/mm: Memory block size: 128MB
Nov 25 04:00:19 np0005534696 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 04:00:19 np0005534696 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 04:00:19 np0005534696 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 04:00:19 np0005534696 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 04:00:19 np0005534696 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 04:00:19 np0005534696 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 04:00:19 np0005534696 kernel: audit: initializing netlink subsys (disabled)
Nov 25 04:00:19 np0005534696 kernel: audit: type=2000 audit(1764061219.080:1): state=initialized audit_enabled=0 res=1
Nov 25 04:00:19 np0005534696 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 04:00:19 np0005534696 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 04:00:19 np0005534696 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 04:00:19 np0005534696 kernel: cpuidle: using governor menu
Nov 25 04:00:19 np0005534696 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 04:00:19 np0005534696 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 25 04:00:19 np0005534696 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 25 04:00:19 np0005534696 kernel: PCI: Using configuration type 1 for base access
Nov 25 04:00:19 np0005534696 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 04:00:19 np0005534696 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 04:00:19 np0005534696 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 04:00:19 np0005534696 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 04:00:19 np0005534696 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 04:00:19 np0005534696 kernel: Demotion targets for Node 0: null
Nov 25 04:00:19 np0005534696 kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 04:00:19 np0005534696 kernel: ACPI: Added _OSI(Module Device)
Nov 25 04:00:19 np0005534696 kernel: ACPI: Added _OSI(Processor Device)
Nov 25 04:00:19 np0005534696 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 04:00:19 np0005534696 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 04:00:19 np0005534696 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 04:00:19 np0005534696 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 04:00:19 np0005534696 kernel: ACPI: Interpreter enabled
Nov 25 04:00:19 np0005534696 kernel: ACPI: PM: (supports S0 S5)
Nov 25 04:00:19 np0005534696 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 04:00:19 np0005534696 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 04:00:19 np0005534696 kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 04:00:19 np0005534696 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 04:00:19 np0005534696 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 04:00:19 np0005534696 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Nov 25 04:00:19 np0005534696 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Nov 25 04:00:19 np0005534696 kernel: PCI host bridge to bus 0000:00
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:02: extended config space not accessible
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [1] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [2] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [3] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [4] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [5] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [6] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [7] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [8] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [9] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [10] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [11] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [12] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [13] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [14] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [15] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [16] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [17] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [18] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [19] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [20] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [21] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [22] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [23] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [24] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [25] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [26] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [27] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [28] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [29] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [30] registered
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [31] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-2] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-3] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-4] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-5] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 25 04:00:19 np0005534696 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-6] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-7] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-8] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-9] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-10] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-11] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-12] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-13] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-14] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-15] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-16] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 04:00:19 np0005534696 kernel: acpiphp: Slot [0-17] registered
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 25 04:00:19 np0005534696 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 25 04:00:19 np0005534696 kernel: iommu: Default domain type: Translated
Nov 25 04:00:19 np0005534696 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 04:00:19 np0005534696 kernel: SCSI subsystem initialized
Nov 25 04:00:19 np0005534696 kernel: ACPI: bus type USB registered
Nov 25 04:00:19 np0005534696 kernel: usbcore: registered new interface driver usbfs
Nov 25 04:00:19 np0005534696 kernel: usbcore: registered new interface driver hub
Nov 25 04:00:19 np0005534696 kernel: usbcore: registered new device driver usb
Nov 25 04:00:19 np0005534696 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 04:00:19 np0005534696 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 04:00:19 np0005534696 kernel: PTP clock support registered
Nov 25 04:00:19 np0005534696 kernel: EDAC MC: Ver: 3.0.0
Nov 25 04:00:19 np0005534696 kernel: NetLabel: Initializing
Nov 25 04:00:19 np0005534696 kernel: NetLabel:  domain hash size = 128
Nov 25 04:00:19 np0005534696 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 04:00:19 np0005534696 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 04:00:19 np0005534696 kernel: PCI: Using ACPI for IRQ routing
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 04:00:19 np0005534696 kernel: vgaarb: loaded
Nov 25 04:00:19 np0005534696 kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 04:00:19 np0005534696 kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 04:00:19 np0005534696 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 04:00:19 np0005534696 kernel: pnp: PnP ACPI init
Nov 25 04:00:19 np0005534696 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 25 04:00:19 np0005534696 kernel: pnp: PnP ACPI: found 5 devices
Nov 25 04:00:19 np0005534696 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 04:00:19 np0005534696 kernel: NET: Registered PF_INET protocol family
Nov 25 04:00:19 np0005534696 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 04:00:19 np0005534696 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 04:00:19 np0005534696 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 04:00:19 np0005534696 kernel: NET: Registered PF_XDP protocol family
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Nov 25 04:00:19 np0005534696 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Nov 25 04:00:19 np0005534696 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Nov 25 04:00:19 np0005534696 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 25 04:00:19 np0005534696 kernel: PCI: CLS 0 bytes, default 64
Nov 25 04:00:19 np0005534696 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 04:00:19 np0005534696 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Nov 25 04:00:19 np0005534696 kernel: Trying to unpack rootfs image as initramfs...
Nov 25 04:00:19 np0005534696 kernel: ACPI: bus type thunderbolt registered
Nov 25 04:00:19 np0005534696 kernel: Initialise system trusted keyrings
Nov 25 04:00:19 np0005534696 kernel: Key type blacklist registered
Nov 25 04:00:19 np0005534696 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 04:00:19 np0005534696 kernel: zbud: loaded
Nov 25 04:00:19 np0005534696 kernel: integrity: Platform Keyring initialized
Nov 25 04:00:19 np0005534696 kernel: integrity: Machine keyring initialized
Nov 25 04:00:19 np0005534696 kernel: Freeing initrd memory: 75160K
Nov 25 04:00:19 np0005534696 kernel: NET: Registered PF_ALG protocol family
Nov 25 04:00:19 np0005534696 kernel: xor: automatically using best checksumming function   avx       
Nov 25 04:00:19 np0005534696 kernel: Key type asymmetric registered
Nov 25 04:00:19 np0005534696 kernel: Asymmetric key parser 'x509' registered
Nov 25 04:00:19 np0005534696 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 04:00:19 np0005534696 kernel: io scheduler mq-deadline registered
Nov 25 04:00:19 np0005534696 kernel: io scheduler kyber registered
Nov 25 04:00:19 np0005534696 kernel: io scheduler bfq registered
Nov 25 04:00:19 np0005534696 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 25 04:00:19 np0005534696 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Nov 25 04:00:19 np0005534696 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Nov 25 04:00:19 np0005534696 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Nov 25 04:00:19 np0005534696 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Nov 25 04:00:19 np0005534696 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Nov 25 04:00:19 np0005534696 kernel: shpchp 0000:01:00.0: Slot initialization failed
Nov 25 04:00:19 np0005534696 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 04:00:19 np0005534696 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 04:00:19 np0005534696 kernel: ACPI: button: Power Button [PWRF]
Nov 25 04:00:19 np0005534696 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 25 04:00:19 np0005534696 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 04:00:19 np0005534696 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 04:00:19 np0005534696 kernel: Non-volatile memory driver v1.3
Nov 25 04:00:19 np0005534696 kernel: rdac: device handler registered
Nov 25 04:00:19 np0005534696 kernel: hp_sw: device handler registered
Nov 25 04:00:19 np0005534696 kernel: emc: device handler registered
Nov 25 04:00:19 np0005534696 kernel: alua: device handler registered
Nov 25 04:00:19 np0005534696 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Nov 25 04:00:19 np0005534696 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Nov 25 04:00:19 np0005534696 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Nov 25 04:00:19 np0005534696 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Nov 25 04:00:19 np0005534696 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 04:00:19 np0005534696 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 04:00:19 np0005534696 kernel: usb usb1: Product: UHCI Host Controller
Nov 25 04:00:19 np0005534696 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 04:00:19 np0005534696 kernel: usb usb1: SerialNumber: 0000:02:01.0
Nov 25 04:00:19 np0005534696 kernel: hub 1-0:1.0: USB hub found
Nov 25 04:00:19 np0005534696 kernel: hub 1-0:1.0: 2 ports detected
Nov 25 04:00:19 np0005534696 kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 04:00:19 np0005534696 kernel: usbserial: USB Serial support registered for generic
Nov 25 04:00:19 np0005534696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 04:00:19 np0005534696 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 04:00:19 np0005534696 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 04:00:19 np0005534696 kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 04:00:19 np0005534696 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 25 04:00:19 np0005534696 kernel: rtc_cmos 00:03: registered as rtc0
Nov 25 04:00:19 np0005534696 kernel: rtc_cmos 00:03: setting system clock to 2025-11-25T09:00:19 UTC (1764061219)
Nov 25 04:00:19 np0005534696 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 25 04:00:19 np0005534696 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 04:00:19 np0005534696 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 04:00:19 np0005534696 kernel: usbcore: registered new interface driver usbhid
Nov 25 04:00:19 np0005534696 kernel: usbhid: USB HID core driver
Nov 25 04:00:19 np0005534696 kernel: drop_monitor: Initializing network drop monitor service
Nov 25 04:00:19 np0005534696 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 04:00:19 np0005534696 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 04:00:19 np0005534696 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 04:00:19 np0005534696 kernel: Initializing XFRM netlink socket
Nov 25 04:00:19 np0005534696 kernel: NET: Registered PF_INET6 protocol family
Nov 25 04:00:19 np0005534696 kernel: Segment Routing with IPv6
Nov 25 04:00:19 np0005534696 kernel: NET: Registered PF_PACKET protocol family
Nov 25 04:00:19 np0005534696 kernel: mpls_gso: MPLS GSO support
Nov 25 04:00:19 np0005534696 kernel: IPI shorthand broadcast: enabled
Nov 25 04:00:19 np0005534696 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 04:00:19 np0005534696 kernel: AES CTR mode by8 optimization enabled
Nov 25 04:00:19 np0005534696 kernel: sched_clock: Marking stable (997003226, 146949275)->(1387207045, -243254544)
Nov 25 04:00:19 np0005534696 kernel: registered taskstats version 1
Nov 25 04:00:19 np0005534696 kernel: Loading compiled-in X.509 certificates
Nov 25 04:00:19 np0005534696 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 04:00:19 np0005534696 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 04:00:19 np0005534696 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 04:00:19 np0005534696 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 04:00:19 np0005534696 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 04:00:19 np0005534696 kernel: Demotion targets for Node 0: null
Nov 25 04:00:19 np0005534696 kernel: page_owner is disabled
Nov 25 04:00:19 np0005534696 kernel: Key type .fscrypt registered
Nov 25 04:00:19 np0005534696 kernel: Key type fscrypt-provisioning registered
Nov 25 04:00:19 np0005534696 kernel: Key type big_key registered
Nov 25 04:00:19 np0005534696 kernel: Key type encrypted registered
Nov 25 04:00:19 np0005534696 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 04:00:19 np0005534696 kernel: Loading compiled-in module X.509 certificates
Nov 25 04:00:19 np0005534696 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 04:00:19 np0005534696 kernel: ima: Allocated hash algorithm: sha256
Nov 25 04:00:19 np0005534696 kernel: ima: No architecture policies found
Nov 25 04:00:19 np0005534696 kernel: evm: Initialising EVM extended attributes:
Nov 25 04:00:19 np0005534696 kernel: evm: security.selinux
Nov 25 04:00:19 np0005534696 kernel: evm: security.SMACK64 (disabled)
Nov 25 04:00:19 np0005534696 kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 04:00:19 np0005534696 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 04:00:19 np0005534696 kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 04:00:19 np0005534696 kernel: evm: security.apparmor (disabled)
Nov 25 04:00:19 np0005534696 kernel: evm: security.ima
Nov 25 04:00:19 np0005534696 kernel: evm: security.capability
Nov 25 04:00:19 np0005534696 kernel: evm: HMAC attrs: 0x1
Nov 25 04:00:19 np0005534696 kernel: Running certificate verification RSA selftest
Nov 25 04:00:19 np0005534696 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 04:00:19 np0005534696 kernel: Running certificate verification ECDSA selftest
Nov 25 04:00:19 np0005534696 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 04:00:19 np0005534696 kernel: clk: Disabling unused clocks
Nov 25 04:00:19 np0005534696 kernel: Freeing unused decrypted memory: 2028K
Nov 25 04:00:19 np0005534696 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 04:00:19 np0005534696 kernel: Write protecting the kernel read-only data: 30720k
Nov 25 04:00:19 np0005534696 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 04:00:19 np0005534696 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 04:00:19 np0005534696 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 04:00:19 np0005534696 kernel: Run /init as init process
Nov 25 04:00:19 np0005534696 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 04:00:19 np0005534696 systemd: Detected virtualization kvm.
Nov 25 04:00:19 np0005534696 systemd: Detected architecture x86-64.
Nov 25 04:00:19 np0005534696 systemd: Running in initrd.
Nov 25 04:00:19 np0005534696 systemd: No hostname configured, using default hostname.
Nov 25 04:00:19 np0005534696 systemd: Hostname set to <localhost>.
Nov 25 04:00:19 np0005534696 systemd: Initializing machine ID from VM UUID.
Nov 25 04:00:19 np0005534696 systemd: Queued start job for default target Initrd Default Target.
Nov 25 04:00:19 np0005534696 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 04:00:19 np0005534696 systemd: Reached target Local Encrypted Volumes.
Nov 25 04:00:19 np0005534696 systemd: Reached target Initrd /usr File System.
Nov 25 04:00:19 np0005534696 systemd: Reached target Local File Systems.
Nov 25 04:00:19 np0005534696 systemd: Reached target Path Units.
Nov 25 04:00:19 np0005534696 systemd: Reached target Slice Units.
Nov 25 04:00:19 np0005534696 systemd: Reached target Swaps.
Nov 25 04:00:19 np0005534696 systemd: Reached target Timer Units.
Nov 25 04:00:19 np0005534696 systemd: Listening on D-Bus System Message Bus Socket.
Nov 25 04:00:19 np0005534696 systemd: Listening on Journal Socket (/dev/log).
Nov 25 04:00:19 np0005534696 systemd: Listening on Journal Socket.
Nov 25 04:00:19 np0005534696 systemd: Listening on udev Control Socket.
Nov 25 04:00:19 np0005534696 systemd: Listening on udev Kernel Socket.
Nov 25 04:00:19 np0005534696 systemd: Reached target Socket Units.
Nov 25 04:00:19 np0005534696 systemd: Starting Create List of Static Device Nodes...
Nov 25 04:00:19 np0005534696 systemd: Starting Journal Service...
Nov 25 04:00:19 np0005534696 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 04:00:19 np0005534696 systemd: Starting Apply Kernel Variables...
Nov 25 04:00:19 np0005534696 systemd: Starting Create System Users...
Nov 25 04:00:19 np0005534696 systemd: Starting Setup Virtual Console...
Nov 25 04:00:19 np0005534696 systemd: Finished Create List of Static Device Nodes.
Nov 25 04:00:19 np0005534696 systemd: Finished Apply Kernel Variables.
Nov 25 04:00:19 np0005534696 systemd: Finished Create System Users.
Nov 25 04:00:19 np0005534696 systemd: Starting Create Static Device Nodes in /dev...
Nov 25 04:00:19 np0005534696 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 04:00:19 np0005534696 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 04:00:19 np0005534696 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 04:00:19 np0005534696 kernel: usb 1-1: Manufacturer: QEMU
Nov 25 04:00:19 np0005534696 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Nov 25 04:00:19 np0005534696 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 04:00:19 np0005534696 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Nov 25 04:00:19 np0005534696 systemd: Finished Create Static Device Nodes in /dev.
Nov 25 04:00:19 np0005534696 systemd-journald[280]: Journal started
Nov 25 04:00:19 np0005534696 systemd-journald[280]: Runtime Journal (/run/log/journal/24892f3205154f8a815cb9b89f76dd8c) is 8.0M, max 153.6M, 145.6M free.
Nov 25 04:00:19 np0005534696 systemd-sysusers[283]: Creating group 'users' with GID 100.
Nov 25 04:00:19 np0005534696 systemd-sysusers[283]: Creating group 'dbus' with GID 81.
Nov 25 04:00:19 np0005534696 systemd-sysusers[283]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 04:00:19 np0005534696 systemd: Started Journal Service.
Nov 25 04:00:20 np0005534696 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 04:00:20 np0005534696 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 04:00:20 np0005534696 systemd[1]: Finished Setup Virtual Console.
Nov 25 04:00:20 np0005534696 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 04:00:20 np0005534696 systemd[1]: Starting dracut cmdline hook...
Nov 25 04:00:20 np0005534696 dracut-cmdline[298]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 04:00:20 np0005534696 dracut-cmdline[298]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 04:00:20 np0005534696 systemd[1]: Finished dracut cmdline hook.
Nov 25 04:00:20 np0005534696 systemd[1]: Starting dracut pre-udev hook...
Nov 25 04:00:20 np0005534696 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 04:00:20 np0005534696 kernel: device-mapper: uevent: version 1.0.3
Nov 25 04:00:20 np0005534696 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 04:00:20 np0005534696 kernel: RPC: Registered named UNIX socket transport module.
Nov 25 04:00:20 np0005534696 kernel: RPC: Registered udp transport module.
Nov 25 04:00:20 np0005534696 kernel: RPC: Registered tcp transport module.
Nov 25 04:00:20 np0005534696 kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 04:00:20 np0005534696 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 04:00:20 np0005534696 rpc.statd[414]: Version 2.5.4 starting
Nov 25 04:00:20 np0005534696 rpc.statd[414]: Initializing NSM state
Nov 25 04:00:20 np0005534696 rpc.idmapd[419]: Setting log level to 0
Nov 25 04:00:20 np0005534696 systemd[1]: Finished dracut pre-udev hook.
Nov 25 04:00:20 np0005534696 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 04:00:20 np0005534696 systemd-udevd[432]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 04:00:20 np0005534696 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 04:00:20 np0005534696 systemd[1]: Starting dracut pre-trigger hook...
Nov 25 04:00:20 np0005534696 systemd[1]: Finished dracut pre-trigger hook.
Nov 25 04:00:20 np0005534696 systemd[1]: Starting Coldplug All udev Devices...
Nov 25 04:00:20 np0005534696 systemd[1]: Created slice Slice /system/modprobe.
Nov 25 04:00:20 np0005534696 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 04:00:20 np0005534696 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 04:00:20 np0005534696 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 04:00:20 np0005534696 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 04:00:20 np0005534696 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 04:00:20 np0005534696 systemd[1]: Reached target Network.
Nov 25 04:00:20 np0005534696 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 04:00:20 np0005534696 systemd[1]: Starting dracut initqueue hook...
Nov 25 04:00:20 np0005534696 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Nov 25 04:00:20 np0005534696 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 04:00:20 np0005534696 kernel: vda: vda1
Nov 25 04:00:20 np0005534696 systemd-udevd[438]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:00:20 np0005534696 systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 04:00:20 np0005534696 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 25 04:00:20 np0005534696 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 25 04:00:20 np0005534696 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 25 04:00:20 np0005534696 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Nov 25 04:00:20 np0005534696 kernel: scsi host0: ahci
Nov 25 04:00:20 np0005534696 kernel: scsi host1: ahci
Nov 25 04:00:20 np0005534696 kernel: scsi host2: ahci
Nov 25 04:00:20 np0005534696 kernel: scsi host3: ahci
Nov 25 04:00:20 np0005534696 kernel: scsi host4: ahci
Nov 25 04:00:20 np0005534696 kernel: scsi host5: ahci
Nov 25 04:00:20 np0005534696 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Nov 25 04:00:20 np0005534696 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Nov 25 04:00:20 np0005534696 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Nov 25 04:00:20 np0005534696 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Nov 25 04:00:20 np0005534696 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Nov 25 04:00:20 np0005534696 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Nov 25 04:00:20 np0005534696 systemd[1]: Reached target Initrd Root Device.
Nov 25 04:00:20 np0005534696 systemd[1]: Mounting Kernel Configuration File System...
Nov 25 04:00:20 np0005534696 systemd[1]: Mounted Kernel Configuration File System.
Nov 25 04:00:20 np0005534696 systemd[1]: Reached target System Initialization.
Nov 25 04:00:20 np0005534696 systemd[1]: Reached target Basic System.
Nov 25 04:00:20 np0005534696 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 25 04:00:20 np0005534696 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 25 04:00:20 np0005534696 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 25 04:00:20 np0005534696 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 25 04:00:20 np0005534696 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 25 04:00:20 np0005534696 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 04:00:20 np0005534696 kernel: ata1.00: applying bridge limits
Nov 25 04:00:20 np0005534696 kernel: ata1.00: configured for UDMA/100
Nov 25 04:00:20 np0005534696 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 25 04:00:20 np0005534696 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 04:00:20 np0005534696 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 04:00:20 np0005534696 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 04:00:21 np0005534696 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 04:00:21 np0005534696 systemd[1]: Finished dracut initqueue hook.
Nov 25 04:00:21 np0005534696 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 04:00:21 np0005534696 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 04:00:21 np0005534696 systemd[1]: Reached target Remote File Systems.
Nov 25 04:00:21 np0005534696 systemd[1]: Starting dracut pre-mount hook...
Nov 25 04:00:21 np0005534696 systemd[1]: Finished dracut pre-mount hook.
Nov 25 04:00:21 np0005534696 systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 04:00:21 np0005534696 systemd-fsck[525]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 04:00:21 np0005534696 systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 04:00:21 np0005534696 systemd[1]: Mounting /sysroot...
Nov 25 04:00:21 np0005534696 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 04:00:21 np0005534696 kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 04:00:21 np0005534696 kernel: XFS (vda1): Ending clean mount
Nov 25 04:00:21 np0005534696 systemd[1]: Mounted /sysroot.
Nov 25 04:00:21 np0005534696 systemd[1]: Reached target Initrd Root File System.
Nov 25 04:00:21 np0005534696 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 04:00:21 np0005534696 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 04:00:21 np0005534696 systemd[1]: Reached target Initrd File Systems.
Nov 25 04:00:21 np0005534696 systemd[1]: Reached target Initrd Default Target.
Nov 25 04:00:21 np0005534696 systemd[1]: Starting dracut mount hook...
Nov 25 04:00:21 np0005534696 systemd[1]: Finished dracut mount hook.
Nov 25 04:00:21 np0005534696 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 04:00:21 np0005534696 rpc.idmapd[419]: exiting on signal 15
Nov 25 04:00:21 np0005534696 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 04:00:21 np0005534696 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Network.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Timer Units.
Nov 25 04:00:21 np0005534696 systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 04:00:21 np0005534696 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Initrd Default Target.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Basic System.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Initrd Root Device.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Initrd /usr File System.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Path Units.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Remote File Systems.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Slice Units.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Socket Units.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target System Initialization.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Local File Systems.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Swaps.
Nov 25 04:00:21 np0005534696 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped dracut mount hook.
Nov 25 04:00:21 np0005534696 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped dracut pre-mount hook.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 04:00:21 np0005534696 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped dracut initqueue hook.
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 04:00:21 np0005534696 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Setup Virtual Console.
Nov 25 04:00:21 np0005534696 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Closed udev Control Socket.
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Closed udev Kernel Socket.
Nov 25 04:00:21 np0005534696 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped dracut pre-udev hook.
Nov 25 04:00:21 np0005534696 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped dracut cmdline hook.
Nov 25 04:00:21 np0005534696 systemd[1]: Starting Cleanup udev Database...
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 04:00:21 np0005534696 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 04:00:21 np0005534696 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Stopped Create System Users.
Nov 25 04:00:21 np0005534696 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 04:00:21 np0005534696 systemd[1]: Finished Cleanup udev Database.
Nov 25 04:00:21 np0005534696 systemd[1]: Reached target Switch Root.
Nov 25 04:00:21 np0005534696 systemd[1]: Starting Switch Root...
Nov 25 04:00:21 np0005534696 systemd[1]: Switching root.
Nov 25 04:00:21 np0005534696 systemd-journald[280]: Received SIGTERM from PID 1 (systemd).
Nov 25 04:00:21 np0005534696 systemd-journald[280]: Journal stopped
Nov 25 04:00:22 np0005534696 kernel: audit: type=1404 audit(1764061221.767:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 04:00:22 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:00:22 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:00:22 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:00:22 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:00:22 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:00:22 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:00:22 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:00:22 np0005534696 kernel: audit: type=1403 audit(1764061221.871:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 04:00:22 np0005534696 systemd: Successfully loaded SELinux policy in 106.986ms.
Nov 25 04:00:22 np0005534696 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.082ms.
Nov 25 04:00:22 np0005534696 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 04:00:22 np0005534696 systemd: Detected virtualization kvm.
Nov 25 04:00:22 np0005534696 systemd: Detected architecture x86-64.
Nov 25 04:00:22 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:00:22 np0005534696 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 25 04:00:22 np0005534696 systemd: Stopped Switch Root.
Nov 25 04:00:22 np0005534696 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 04:00:22 np0005534696 systemd: Created slice Slice /system/getty.
Nov 25 04:00:22 np0005534696 systemd: Created slice Slice /system/serial-getty.
Nov 25 04:00:22 np0005534696 systemd: Created slice Slice /system/sshd-keygen.
Nov 25 04:00:22 np0005534696 systemd: Created slice User and Session Slice.
Nov 25 04:00:22 np0005534696 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 04:00:22 np0005534696 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 25 04:00:22 np0005534696 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 04:00:22 np0005534696 systemd: Reached target Local Encrypted Volumes.
Nov 25 04:00:22 np0005534696 systemd: Stopped target Switch Root.
Nov 25 04:00:22 np0005534696 systemd: Stopped target Initrd File Systems.
Nov 25 04:00:22 np0005534696 systemd: Stopped target Initrd Root File System.
Nov 25 04:00:22 np0005534696 systemd: Reached target Local Integrity Protected Volumes.
Nov 25 04:00:22 np0005534696 systemd: Reached target Path Units.
Nov 25 04:00:22 np0005534696 systemd: Reached target rpc_pipefs.target.
Nov 25 04:00:22 np0005534696 systemd: Reached target Slice Units.
Nov 25 04:00:22 np0005534696 systemd: Reached target Swaps.
Nov 25 04:00:22 np0005534696 systemd: Reached target Local Verity Protected Volumes.
Nov 25 04:00:22 np0005534696 systemd: Listening on RPCbind Server Activation Socket.
Nov 25 04:00:22 np0005534696 systemd: Reached target RPC Port Mapper.
Nov 25 04:00:22 np0005534696 systemd: Listening on Process Core Dump Socket.
Nov 25 04:00:22 np0005534696 systemd: Listening on initctl Compatibility Named Pipe.
Nov 25 04:00:22 np0005534696 systemd: Listening on udev Control Socket.
Nov 25 04:00:22 np0005534696 systemd: Listening on udev Kernel Socket.
Nov 25 04:00:22 np0005534696 systemd: Mounting Huge Pages File System...
Nov 25 04:00:22 np0005534696 systemd: Mounting POSIX Message Queue File System...
Nov 25 04:00:22 np0005534696 systemd: Mounting Kernel Debug File System...
Nov 25 04:00:22 np0005534696 systemd: Mounting Kernel Trace File System...
Nov 25 04:00:22 np0005534696 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 04:00:22 np0005534696 systemd: Starting Create List of Static Device Nodes...
Nov 25 04:00:22 np0005534696 systemd: Starting Load Kernel Module configfs...
Nov 25 04:00:22 np0005534696 systemd: Starting Load Kernel Module drm...
Nov 25 04:00:22 np0005534696 systemd: Starting Load Kernel Module efi_pstore...
Nov 25 04:00:22 np0005534696 systemd: Starting Load Kernel Module fuse...
Nov 25 04:00:22 np0005534696 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 04:00:22 np0005534696 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 25 04:00:22 np0005534696 systemd: Stopped File System Check on Root Device.
Nov 25 04:00:22 np0005534696 systemd: Stopped Journal Service.
Nov 25 04:00:22 np0005534696 systemd: Starting Journal Service...
Nov 25 04:00:22 np0005534696 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 04:00:22 np0005534696 systemd: Starting Generate network units from Kernel command line...
Nov 25 04:00:22 np0005534696 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 04:00:22 np0005534696 kernel: fuse: init (API version 7.37)
Nov 25 04:00:22 np0005534696 systemd: Starting Remount Root and Kernel File Systems...
Nov 25 04:00:22 np0005534696 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 04:00:22 np0005534696 systemd: Starting Apply Kernel Variables...
Nov 25 04:00:22 np0005534696 systemd: Starting Coldplug All udev Devices...
Nov 25 04:00:22 np0005534696 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 25 04:00:22 np0005534696 systemd: Mounted Huge Pages File System.
Nov 25 04:00:22 np0005534696 systemd: Mounted POSIX Message Queue File System.
Nov 25 04:00:22 np0005534696 systemd-journald[649]: Journal started
Nov 25 04:00:22 np0005534696 systemd-journald[649]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 04:00:22 np0005534696 systemd[1]: Queued start job for default target Multi-User System.
Nov 25 04:00:22 np0005534696 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 04:00:22 np0005534696 systemd: Started Journal Service.
Nov 25 04:00:22 np0005534696 systemd[1]: Mounted Kernel Debug File System.
Nov 25 04:00:22 np0005534696 systemd[1]: Mounted Kernel Trace File System.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 04:00:22 np0005534696 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 04:00:22 np0005534696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 04:00:22 np0005534696 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Load Kernel Module fuse.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Apply Kernel Variables.
Nov 25 04:00:22 np0005534696 kernel: ACPI: bus type drm_connector registered
Nov 25 04:00:22 np0005534696 systemd[1]: Mounting FUSE Control File System...
Nov 25 04:00:22 np0005534696 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Rebuild Hardware Database...
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 04:00:22 np0005534696 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Create System Users...
Nov 25 04:00:22 np0005534696 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Load Kernel Module drm.
Nov 25 04:00:22 np0005534696 systemd-journald[649]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 04:00:22 np0005534696 systemd-journald[649]: Received client request to flush runtime journal.
Nov 25 04:00:22 np0005534696 systemd[1]: Mounted FUSE Control File System.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 04:00:22 np0005534696 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Create System Users.
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Coldplug All udev Devices.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 04:00:22 np0005534696 systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 04:00:22 np0005534696 systemd[1]: Reached target Local File Systems.
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 04:00:22 np0005534696 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 04:00:22 np0005534696 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 04:00:22 np0005534696 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 04:00:22 np0005534696 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 04:00:22 np0005534696 bootctl[666]: Couldn't find EFI system partition, skipping.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Automatic Boot Loader Update.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Security Auditing Service...
Nov 25 04:00:22 np0005534696 systemd[1]: Starting RPC Bind...
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 04:00:22 np0005534696 auditd[672]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 04:00:22 np0005534696 auditd[672]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 04:00:22 np0005534696 systemd[1]: Started RPC Bind.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 04:00:22 np0005534696 augenrules[677]: /sbin/augenrules: No change
Nov 25 04:00:22 np0005534696 augenrules[692]: No rules
Nov 25 04:00:22 np0005534696 augenrules[692]: enabled 1
Nov 25 04:00:22 np0005534696 augenrules[692]: failure 1
Nov 25 04:00:22 np0005534696 augenrules[692]: pid 672
Nov 25 04:00:22 np0005534696 augenrules[692]: rate_limit 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_limit 8192
Nov 25 04:00:22 np0005534696 augenrules[692]: lost 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_wait_time 60000
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_wait_time_actual 0
Nov 25 04:00:22 np0005534696 augenrules[692]: enabled 1
Nov 25 04:00:22 np0005534696 augenrules[692]: failure 1
Nov 25 04:00:22 np0005534696 augenrules[692]: pid 672
Nov 25 04:00:22 np0005534696 augenrules[692]: rate_limit 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_limit 8192
Nov 25 04:00:22 np0005534696 augenrules[692]: lost 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_wait_time 60000
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_wait_time_actual 0
Nov 25 04:00:22 np0005534696 augenrules[692]: enabled 1
Nov 25 04:00:22 np0005534696 augenrules[692]: failure 1
Nov 25 04:00:22 np0005534696 augenrules[692]: pid 672
Nov 25 04:00:22 np0005534696 augenrules[692]: rate_limit 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_limit 8192
Nov 25 04:00:22 np0005534696 augenrules[692]: lost 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog 0
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_wait_time 60000
Nov 25 04:00:22 np0005534696 augenrules[692]: backlog_wait_time_actual 0
Nov 25 04:00:22 np0005534696 systemd[1]: Started Security Auditing Service.
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Rebuild Hardware Database.
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Update is Completed...
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Update is Completed.
Nov 25 04:00:22 np0005534696 systemd-udevd[700]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 04:00:22 np0005534696 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 04:00:22 np0005534696 systemd[1]: Reached target System Initialization.
Nov 25 04:00:22 np0005534696 systemd[1]: Started dnf makecache --timer.
Nov 25 04:00:22 np0005534696 systemd[1]: Started Daily rotation of log files.
Nov 25 04:00:22 np0005534696 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 04:00:22 np0005534696 systemd[1]: Reached target Timer Units.
Nov 25 04:00:22 np0005534696 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 04:00:22 np0005534696 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 04:00:22 np0005534696 systemd[1]: Reached target Socket Units.
Nov 25 04:00:22 np0005534696 systemd[1]: Starting D-Bus System Message Bus...
Nov 25 04:00:22 np0005534696 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Load Kernel Module configfs...
Nov 25 04:00:22 np0005534696 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Load Kernel Module configfs.
Nov 25 04:00:22 np0005534696 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 04:00:22 np0005534696 systemd[1]: Started D-Bus System Message Bus.
Nov 25 04:00:22 np0005534696 systemd[1]: Reached target Basic System.
Nov 25 04:00:22 np0005534696 dbus-broker-lau[725]: Ready
Nov 25 04:00:22 np0005534696 systemd[1]: Starting NTP client/server...
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 04:00:22 np0005534696 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 04:00:22 np0005534696 systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 04:00:22 np0005534696 systemd[1]: Started irqbalance daemon.
Nov 25 04:00:22 np0005534696 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 04:00:22 np0005534696 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 04:00:22 np0005534696 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 04:00:22 np0005534696 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 04:00:22 np0005534696 systemd[1]: Reached target sshd-keygen.target.
Nov 25 04:00:22 np0005534696 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 04:00:22 np0005534696 systemd[1]: Reached target User and Group Name Lookups.
Nov 25 04:00:22 np0005534696 systemd[1]: Starting User Login Management...
Nov 25 04:00:22 np0005534696 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 04:00:22 np0005534696 chronyd[746]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 04:00:22 np0005534696 chronyd[746]: Loaded 0 symmetric keys
Nov 25 04:00:22 np0005534696 chronyd[746]: Using right/UTC timezone to obtain leap second data
Nov 25 04:00:22 np0005534696 chronyd[746]: Loaded seccomp filter (level 2)
Nov 25 04:00:22 np0005534696 systemd[1]: Started NTP client/server.
Nov 25 04:00:22 np0005534696 systemd-logind[744]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 04:00:22 np0005534696 systemd-logind[744]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 04:00:22 np0005534696 systemd-udevd[723]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:00:23 np0005534696 systemd-logind[744]: New seat seat0.
Nov 25 04:00:23 np0005534696 systemd[1]: Started User Login Management.
Nov 25 04:00:23 np0005534696 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 04:00:23 np0005534696 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 04:00:23 np0005534696 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 04:00:23 np0005534696 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 25 04:00:23 np0005534696 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Nov 25 04:00:23 np0005534696 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 25 04:00:23 np0005534696 kernel: Console: switching to colour dummy device 80x25
Nov 25 04:00:23 np0005534696 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 04:00:23 np0005534696 kernel: [drm] features: -context_init
Nov 25 04:00:23 np0005534696 kernel: [drm] number of scanouts: 1
Nov 25 04:00:23 np0005534696 kernel: [drm] number of cap sets: 0
Nov 25 04:00:23 np0005534696 iptables.init[737]: iptables: Applying firewall rules: [  OK  ]
Nov 25 04:00:23 np0005534696 systemd[1]: Finished IPv4 firewall with iptables.
Nov 25 04:00:23 np0005534696 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Nov 25 04:00:23 np0005534696 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 04:00:23 np0005534696 kernel: Console: switching to colour frame buffer device 160x50
Nov 25 04:00:23 np0005534696 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 04:00:23 np0005534696 kernel: iTCO_vendor_support: vendor-support=0
Nov 25 04:00:23 np0005534696 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 25 04:00:23 np0005534696 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 04:00:23 np0005534696 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 04:00:23 np0005534696 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Nov 25 04:00:23 np0005534696 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Nov 25 04:00:23 np0005534696 kernel: kvm_amd: TSC scaling supported
Nov 25 04:00:23 np0005534696 kernel: kvm_amd: Nested Virtualization enabled
Nov 25 04:00:23 np0005534696 kernel: kvm_amd: Nested Paging enabled
Nov 25 04:00:23 np0005534696 kernel: kvm_amd: LBR virtualization supported
Nov 25 04:00:23 np0005534696 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 25 04:00:23 np0005534696 kernel: kvm_amd: Virtual GIF supported
Nov 25 04:00:23 np0005534696 cloud-init[791]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 09:00:23 +0000. Up 4.84 seconds.
Nov 25 04:00:23 np0005534696 systemd[1]: run-cloud\x2dinit-tmp-tmpvzgo39v0.mount: Deactivated successfully.
Nov 25 04:00:23 np0005534696 systemd[1]: Starting Hostname Service...
Nov 25 04:00:23 np0005534696 systemd[1]: Started Hostname Service.
Nov 25 04:00:23 np0005534696 systemd-hostnamed[807]: Hostname set to <np0005534696> (static)
Nov 25 04:00:23 np0005534696 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 04:00:23 np0005534696 systemd[1]: Reached target Preparation for Network.
Nov 25 04:00:23 np0005534696 systemd[1]: Starting Network Manager...
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8079] NetworkManager (version 1.54.1-1.el9) is starting... (boot:64a5e7c0-2269-451f-b570-b292d4bfbc96)
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8084] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8184] manager[0x555dfbe4f080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8212] hostname: hostname: using hostnamed
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8212] hostname: static hostname changed from (none) to "np0005534696"
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8215] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8301] manager[0x555dfbe4f080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8301] manager[0x555dfbe4f080]: rfkill: WWAN hardware radio set enabled
Nov 25 04:00:23 np0005534696 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8347] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8347] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8348] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8348] manager: Networking is enabled by state file
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8349] settings: Loaded settings plugin: keyfile (internal)
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8364] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8382] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8396] dhcp: init: Using DHCP client 'internal'
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8398] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8408] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8416] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8422] device (lo): Activation: starting connection 'lo' (82e74c3e-4080-479b-a643-546a3994c15a)
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8429] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8431] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8457] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8461] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8464] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8465] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8467] device (eth0): carrier: link connected
Nov 25 04:00:23 np0005534696 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8470] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8477] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8485] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8489] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8490] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8491] manager: NetworkManager state is now CONNECTING
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8492] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:00:23 np0005534696 systemd[1]: Started Network Manager.
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8498] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8504] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:00:23 np0005534696 systemd[1]: Reached target Network.
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8509] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 04:00:23 np0005534696 systemd[1]: Starting Network Manager Wait Online...
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8544] dhcp4 (eth0): state changed new lease, address=192.168.26.176
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8553] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 04:00:23 np0005534696 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 04:00:23 np0005534696 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8641] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8644] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 04:00:23 np0005534696 NetworkManager[811]: <info>  [1764061223.8658] device (lo): Activation: successful, device activated.
Nov 25 04:00:23 np0005534696 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 04:00:23 np0005534696 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 04:00:23 np0005534696 systemd[1]: Reached target NFS client services.
Nov 25 04:00:23 np0005534696 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 04:00:23 np0005534696 systemd[1]: Reached target Remote File Systems.
Nov 25 04:00:23 np0005534696 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 04:00:25 np0005534696 NetworkManager[811]: <info>  [1764061225.5194] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:00:26 np0005534696 NetworkManager[811]: <info>  [1764061226.6120] dhcp6 (eth0): state changed new lease, address=2001:db8::2d4
Nov 25 04:00:28 np0005534696 NetworkManager[811]: <info>  [1764061228.5919] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:00:28 np0005534696 NetworkManager[811]: <info>  [1764061228.5950] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:00:28 np0005534696 NetworkManager[811]: <info>  [1764061228.5951] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:00:28 np0005534696 NetworkManager[811]: <info>  [1764061228.5954] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 04:00:28 np0005534696 NetworkManager[811]: <info>  [1764061228.5958] device (eth0): Activation: successful, device activated.
Nov 25 04:00:28 np0005534696 NetworkManager[811]: <info>  [1764061228.5962] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 04:00:28 np0005534696 NetworkManager[811]: <info>  [1764061228.5965] manager: startup complete
Nov 25 04:00:28 np0005534696 systemd[1]: Finished Network Manager Wait Online.
Nov 25 04:00:28 np0005534696 systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 04:00:28 np0005534696 chronyd[746]: Selected source 50.205.57.38 (2.centos.pool.ntp.org)
Nov 25 04:00:28 np0005534696 chronyd[746]: System clock TAI offset set to 37 seconds
Nov 25 04:00:28 np0005534696 cloud-init[877]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 09:00:28 +0000. Up 10.31 seconds.
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |  eth0  | True |        192.168.26.176        | 255.255.255.0 | global | fa:16:3e:22:25:08 |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |  eth0  | True |      2001:db8::2d4/128       |       .       | global | fa:16:3e:22:25:08 |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |  eth0  | True | fe80::f816:3eff:fe22:2508/64 |       .       |  link  | fa:16:3e:22:25:08 |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: ++++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: | Route |  Destination  |   Gateway   | Interface | Flags |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   1   |  2001:db8::1  |      ::     |    eth0   |   U   |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   2   | 2001:db8::2d4 |      ::     |    eth0   |   U   |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   3   |   fe80::/64   |      ::     |    eth0   |   U   |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   4   |      ::/0     | 2001:db8::1 |    eth0   |   UG  |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   6   |     local     |      ::     |    eth0   |   U   |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   7   |     local     |      ::     |    eth0   |   U   |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: |   8   |   multicast   |      ::     |    eth0   |   U   |
Nov 25 04:00:28 np0005534696 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
Nov 25 04:00:29 np0005534696 cloud-init[877]: Generating public/private rsa key pair.
Nov 25 04:00:29 np0005534696 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 04:00:29 np0005534696 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 04:00:29 np0005534696 cloud-init[877]: The key fingerprint is:
Nov 25 04:00:29 np0005534696 cloud-init[877]: SHA256:Ni6e2zoIW45sUyMBv1Mnp46HrYXjoaMadQVtoTFy36M root@np0005534696
Nov 25 04:00:29 np0005534696 cloud-init[877]: The key's randomart image is:
Nov 25 04:00:29 np0005534696 cloud-init[877]: +---[RSA 3072]----+
Nov 25 04:00:29 np0005534696 cloud-init[877]: |  . =...         |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |.  o *o.         |
Nov 25 04:00:29 np0005534696 cloud-init[877]: | o  ..o o        |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |  o o.o. .       |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |  .+.=E S        |
Nov 25 04:00:29 np0005534696 cloud-init[877]: | .=o=  o .       |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |..+#.o. .        |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |.+O+*..+         |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |*oo+  =+.        |
Nov 25 04:00:29 np0005534696 cloud-init[877]: +----[SHA256]-----+
Nov 25 04:00:29 np0005534696 cloud-init[877]: Generating public/private ecdsa key pair.
Nov 25 04:00:29 np0005534696 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 04:00:29 np0005534696 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 04:00:29 np0005534696 cloud-init[877]: The key fingerprint is:
Nov 25 04:00:29 np0005534696 cloud-init[877]: SHA256:/D+H33FU8WgnkSb1xBR1itfFKjkBEf6kX81QLejyLp4 root@np0005534696
Nov 25 04:00:29 np0005534696 cloud-init[877]: The key's randomart image is:
Nov 25 04:00:29 np0005534696 cloud-init[877]: +---[ECDSA 256]---+
Nov 25 04:00:29 np0005534696 cloud-init[877]: |          ++ ooB@|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |         .  +.+OO|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |          ..o=Bo*|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |       .  .+=o.*.|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |        S .o.o. +|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |         . ... . |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |          .... ..|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |          .o+ ..o|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |         .E..+. .|
Nov 25 04:00:29 np0005534696 cloud-init[877]: +----[SHA256]-----+
Nov 25 04:00:29 np0005534696 cloud-init[877]: Generating public/private ed25519 key pair.
Nov 25 04:00:29 np0005534696 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 04:00:29 np0005534696 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 04:00:29 np0005534696 cloud-init[877]: The key fingerprint is:
Nov 25 04:00:29 np0005534696 cloud-init[877]: SHA256:L1hE+95ZZRfGM1u0njo/Qkx7i75bIIaoj7svT0Wpmdw root@np0005534696
Nov 25 04:00:29 np0005534696 cloud-init[877]: The key's randomart image is:
Nov 25 04:00:29 np0005534696 cloud-init[877]: +--[ED25519 256]--+
Nov 25 04:00:29 np0005534696 cloud-init[877]: |        .     .+.|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |       . ..   .++|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |        oo     .O|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |      ..B..  ..+o|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |       *SE.oo.oo |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |      .o.o...*o. |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |     .... o +oo..|
Nov 25 04:00:29 np0005534696 cloud-init[877]: |     .+  .   o+o |
Nov 25 04:00:29 np0005534696 cloud-init[877]: |     +*+    .++..|
Nov 25 04:00:29 np0005534696 cloud-init[877]: +----[SHA256]-----+
Nov 25 04:00:29 np0005534696 systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 04:00:29 np0005534696 systemd[1]: Reached target Cloud-config availability.
Nov 25 04:00:29 np0005534696 systemd[1]: Reached target Network is Online.
Nov 25 04:00:29 np0005534696 systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 04:00:29 np0005534696 systemd[1]: Starting Crash recovery kernel arming...
Nov 25 04:00:29 np0005534696 systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 04:00:29 np0005534696 systemd[1]: Starting System Logging Service...
Nov 25 04:00:29 np0005534696 sm-notify[960]: Version 2.5.4 starting
Nov 25 04:00:29 np0005534696 systemd[1]: Starting OpenSSH server daemon...
Nov 25 04:00:29 np0005534696 systemd[1]: Starting Permit User Sessions...
Nov 25 04:00:29 np0005534696 systemd[1]: Started Notify NFS peers of a restart.
Nov 25 04:00:29 np0005534696 systemd[1]: Finished Permit User Sessions.
Nov 25 04:00:29 np0005534696 systemd[1]: Started Command Scheduler.
Nov 25 04:00:29 np0005534696 systemd[1]: Started Getty on tty1.
Nov 25 04:00:29 np0005534696 systemd[1]: Started Serial Getty on ttyS0.
Nov 25 04:00:29 np0005534696 systemd[1]: Reached target Login Prompts.
Nov 25 04:00:29 np0005534696 systemd[1]: Started OpenSSH server daemon.
Nov 25 04:00:29 np0005534696 rsyslogd[961]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="961" x-info="https://www.rsyslog.com"] start
Nov 25 04:00:29 np0005534696 systemd[1]: Started System Logging Service.
Nov 25 04:00:29 np0005534696 rsyslogd[961]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 04:00:29 np0005534696 systemd[1]: Reached target Multi-User System.
Nov 25 04:00:29 np0005534696 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 04:00:29 np0005534696 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 04:00:29 np0005534696 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 04:00:29 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:00:29 np0005534696 kdumpctl[972]: kdump: No kdump initial ramdisk found.
Nov 25 04:00:29 np0005534696 kdumpctl[972]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 04:00:30 np0005534696 cloud-init[1119]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 09:00:30 +0000. Up 11.52 seconds.
Nov 25 04:00:30 np0005534696 systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 04:00:30 np0005534696 systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 04:00:30 np0005534696 dracut[1239]: dracut-057-102.git20250818.el9
Nov 25 04:00:30 np0005534696 dracut[1241]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 25 04:00:30 np0005534696 cloud-init[1278]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 09:00:30 +0000. Up 11.87 seconds.
Nov 25 04:00:30 np0005534696 cloud-init[1311]: #############################################################
Nov 25 04:00:30 np0005534696 cloud-init[1312]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 04:00:30 np0005534696 cloud-init[1314]: 256 SHA256:/D+H33FU8WgnkSb1xBR1itfFKjkBEf6kX81QLejyLp4 root@np0005534696 (ECDSA)
Nov 25 04:00:30 np0005534696 cloud-init[1316]: 256 SHA256:L1hE+95ZZRfGM1u0njo/Qkx7i75bIIaoj7svT0Wpmdw root@np0005534696 (ED25519)
Nov 25 04:00:30 np0005534696 cloud-init[1318]: 3072 SHA256:Ni6e2zoIW45sUyMBv1Mnp46HrYXjoaMadQVtoTFy36M root@np0005534696 (RSA)
Nov 25 04:00:30 np0005534696 cloud-init[1322]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 04:00:30 np0005534696 cloud-init[1323]: #############################################################
Nov 25 04:00:30 np0005534696 cloud-init[1278]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 09:00:30 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.01 seconds
Nov 25 04:00:30 np0005534696 systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 04:00:30 np0005534696 systemd[1]: Reached target Cloud-init target.
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 04:00:30 np0005534696 dracut[1241]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: memstrack is not available
Nov 25 04:00:31 np0005534696 dracut[1241]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 04:00:31 np0005534696 dracut[1241]: memstrack is not available
Nov 25 04:00:31 np0005534696 dracut[1241]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 04:00:31 np0005534696 dracut[1241]: *** Including module: systemd ***
Nov 25 04:00:31 np0005534696 dracut[1241]: *** Including module: fips ***
Nov 25 04:00:32 np0005534696 dracut[1241]: *** Including module: systemd-initrd ***
Nov 25 04:00:32 np0005534696 dracut[1241]: *** Including module: i18n ***
Nov 25 04:00:32 np0005534696 dracut[1241]: *** Including module: drm ***
Nov 25 04:00:32 np0005534696 dracut[1241]: *** Including module: prefixdevname ***
Nov 25 04:00:32 np0005534696 dracut[1241]: *** Including module: kernel-modules ***
Nov 25 04:00:32 np0005534696 kernel: block vda: the capability attribute has been deprecated.
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: kernel-modules-extra ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: qemu ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: fstab-sys ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: rootfs-block ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: terminfo ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: udev-rules ***
Nov 25 04:00:33 np0005534696 dracut[1241]: Skipping udev rule: 91-permissions.rules
Nov 25 04:00:33 np0005534696 dracut[1241]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: virtiofs ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: dracut-systemd ***
Nov 25 04:00:33 np0005534696 irqbalance[742]: Cannot change IRQ 45 affinity: Operation not permitted
Nov 25 04:00:33 np0005534696 irqbalance[742]: IRQ 45 affinity is now unmanaged
Nov 25 04:00:33 np0005534696 irqbalance[742]: Cannot change IRQ 44 affinity: Operation not permitted
Nov 25 04:00:33 np0005534696 irqbalance[742]: IRQ 44 affinity is now unmanaged
Nov 25 04:00:33 np0005534696 irqbalance[742]: Cannot change IRQ 42 affinity: Operation not permitted
Nov 25 04:00:33 np0005534696 irqbalance[742]: IRQ 42 affinity is now unmanaged
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: usrmount ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: base ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: fs-lib ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: kdumpbase ***
Nov 25 04:00:33 np0005534696 dracut[1241]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 04:00:33 np0005534696 dracut[1241]:  microcode_ctl module: mangling fw_dir
Nov 25 04:00:33 np0005534696 dracut[1241]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 04:00:33 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 04:00:34 np0005534696 dracut[1241]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 25 04:00:34 np0005534696 dracut[1241]: *** Including module: openssl ***
Nov 25 04:00:34 np0005534696 dracut[1241]: *** Including module: shutdown ***
Nov 25 04:00:34 np0005534696 dracut[1241]: *** Including module: squash ***
Nov 25 04:00:34 np0005534696 dracut[1241]: *** Including modules done ***
Nov 25 04:00:34 np0005534696 dracut[1241]: *** Installing kernel module dependencies ***
Nov 25 04:00:35 np0005534696 dracut[1241]: *** Installing kernel module dependencies done ***
Nov 25 04:00:35 np0005534696 dracut[1241]: *** Resolving executable dependencies ***
Nov 25 04:00:36 np0005534696 dracut[1241]: *** Resolving executable dependencies done ***
Nov 25 04:00:36 np0005534696 dracut[1241]: *** Generating early-microcode cpio image ***
Nov 25 04:00:36 np0005534696 dracut[1241]: *** Store current command line parameters ***
Nov 25 04:00:36 np0005534696 dracut[1241]: Stored kernel commandline:
Nov 25 04:00:36 np0005534696 dracut[1241]: No dracut internal kernel commandline stored in the initramfs
Nov 25 04:00:36 np0005534696 dracut[1241]: *** Install squash loader ***
Nov 25 04:00:36 np0005534696 dracut[1241]: *** Squashing the files inside the initramfs ***
Nov 25 04:00:38 np0005534696 dracut[1241]: *** Squashing the files inside the initramfs done ***
Nov 25 04:00:38 np0005534696 dracut[1241]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 04:00:38 np0005534696 dracut[1241]: *** Hardlinking files ***
Nov 25 04:00:38 np0005534696 dracut[1241]: *** Hardlinking files done ***
Nov 25 04:00:38 np0005534696 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 04:00:38 np0005534696 dracut[1241]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 25 04:00:39 np0005534696 kdumpctl[972]: kdump: kexec: loaded kdump kernel
Nov 25 04:00:39 np0005534696 kdumpctl[972]: kdump: Starting kdump: [OK]
Nov 25 04:00:39 np0005534696 systemd[1]: Finished Crash recovery kernel arming.
Nov 25 04:00:39 np0005534696 systemd[1]: Startup finished in 1.230s (kernel) + 1.998s (initrd) + 17.436s (userspace) = 20.665s.
Nov 25 04:00:52 np0005534696 systemd[1]: Created slice User Slice of UID 1000.
Nov 25 04:00:52 np0005534696 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 04:00:52 np0005534696 systemd-logind[744]: New session 1 of user zuul.
Nov 25 04:00:52 np0005534696 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 04:00:52 np0005534696 systemd[1]: Starting User Manager for UID 1000...
Nov 25 04:00:52 np0005534696 systemd[4370]: Queued start job for default target Main User Target.
Nov 25 04:00:52 np0005534696 systemd[4370]: Created slice User Application Slice.
Nov 25 04:00:52 np0005534696 systemd[4370]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 04:00:52 np0005534696 systemd[4370]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 04:00:52 np0005534696 systemd[4370]: Reached target Paths.
Nov 25 04:00:52 np0005534696 systemd[4370]: Reached target Timers.
Nov 25 04:00:52 np0005534696 systemd[4370]: Starting D-Bus User Message Bus Socket...
Nov 25 04:00:52 np0005534696 systemd[4370]: Starting Create User's Volatile Files and Directories...
Nov 25 04:00:52 np0005534696 systemd[4370]: Listening on D-Bus User Message Bus Socket.
Nov 25 04:00:52 np0005534696 systemd[4370]: Reached target Sockets.
Nov 25 04:00:52 np0005534696 systemd[4370]: Finished Create User's Volatile Files and Directories.
Nov 25 04:00:52 np0005534696 systemd[4370]: Reached target Basic System.
Nov 25 04:00:52 np0005534696 systemd[4370]: Reached target Main User Target.
Nov 25 04:00:52 np0005534696 systemd[4370]: Startup finished in 88ms.
Nov 25 04:00:52 np0005534696 systemd[1]: Started User Manager for UID 1000.
Nov 25 04:00:52 np0005534696 systemd[1]: Started Session 1 of User zuul.
Nov 25 04:00:53 np0005534696 python3[4452]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:00:53 np0005534696 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 04:00:56 np0005534696 python3[4482]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:01:00 np0005534696 python3[4536]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:01:01 np0005534696 python3[4576]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 04:01:03 np0005534696 python3[4617]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+gNTjTZDQgtUOszUcfFNwRDhaF3fpuKv4WnYmO9LCSnBOvxKg32kLsWN4UIUhuvnqQCzM+/poM7RT3r9cQ1IsDccOYvVT/Wtp5oKX+m81fz8DhCMYa72X9A2pIXwxQsBgRDPh3oTqqaSR8H+rObzkL49NEB7PB37PSqa7bTT+RtyPa94m/b+vmwdC/CwfC0YTEjQEMXEM2Mx4n7pVA/kVzra/ScNFDdQaJmKWoA28J/ubqkvnvrg0+Z4ywfQ/0sBAXWNOR6LvQ2x4Rqd3uiHgobysScVRo2/+J5NDB1wN+flg8+oxSlhauY+97xKn03faiQ5y1cEiMT5A0Bhn89bTx0VUxzmNXXtQVA9xv3gSfMyOpzGaqf9n4N8yedXl6TXe+ascB5uWelrP6b2aqonb4EtqM7AZYKSLWXDwn7czhaMjUge52BUOKmb0asJdlTXpqZdVVMPfBnYGKIE8DNcp99rTtP5JwVDYKitUQAB45plvpUUYoKYI9h79SFYkhws= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:03 np0005534696 python3[4641]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:03 np0005534696 python3[4740]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:01:04 np0005534696 python3[4811]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061263.5684626-254-98463217895056/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=46be1ba69aef4b9caa3787efccecaa0c_id_rsa follow=False checksum=ab873ac71b169d81ba60edcb9a3df54902eb3861 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:04 np0005534696 python3[4934]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:01:04 np0005534696 python3[5005]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061264.2093303-309-210905589554397/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=46be1ba69aef4b9caa3787efccecaa0c_id_rsa.pub follow=False checksum=bf193b190ac8dfe414ab48ea4e2bf3db22ed6209 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:05 np0005534696 python3[5053]: ansible-ping Invoked with data=pong
Nov 25 04:01:06 np0005534696 python3[5077]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:01:07 np0005534696 python3[5131]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 04:01:08 np0005534696 python3[5163]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:09 np0005534696 python3[5187]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:09 np0005534696 python3[5211]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:09 np0005534696 python3[5235]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:09 np0005534696 python3[5259]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:09 np0005534696 python3[5283]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:11 np0005534696 python3[5309]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:11 np0005534696 python3[5387]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:01:11 np0005534696 python3[5460]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061271.264718-35-18482863993663/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:12 np0005534696 python3[5508]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:12 np0005534696 python3[5532]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:12 np0005534696 python3[5556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:13 np0005534696 python3[5580]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:13 np0005534696 python3[5604]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:13 np0005534696 python3[5628]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:13 np0005534696 python3[5652]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:13 np0005534696 python3[5676]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:14 np0005534696 python3[5700]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:14 np0005534696 python3[5724]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:14 np0005534696 python3[5748]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:14 np0005534696 python3[5772]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:14 np0005534696 python3[5796]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:15 np0005534696 python3[5820]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:15 np0005534696 python3[5844]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:15 np0005534696 python3[5868]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:15 np0005534696 python3[5892]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:15 np0005534696 python3[5916]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:16 np0005534696 python3[5940]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:16 np0005534696 python3[5964]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:16 np0005534696 python3[5988]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:16 np0005534696 python3[6012]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:17 np0005534696 python3[6036]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:17 np0005534696 python3[6060]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:17 np0005534696 python3[6084]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:17 np0005534696 python3[6108]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:01:19 np0005534696 python3[6134]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 04:01:19 np0005534696 systemd[1]: Starting Time & Date Service...
Nov 25 04:01:19 np0005534696 systemd[1]: Started Time & Date Service.
Nov 25 04:01:19 np0005534696 systemd-timedated[6136]: Changed time zone to 'UTC' (UTC).
Nov 25 04:01:20 np0005534696 python3[6165]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:20 np0005534696 python3[6241]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:01:20 np0005534696 python3[6312]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764061280.1871989-255-201994128940304/source _original_basename=tmp5mi8s5j8 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:20 np0005534696 python3[6412]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:01:21 np0005534696 python3[6483]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764061280.7650034-305-153673424595742/source _original_basename=tmpsrl5lkg2 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:21 np0005534696 python3[6585]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:01:22 np0005534696 python3[6658]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764061281.6281288-384-230650190852067/source _original_basename=tmpg738xobv follow=False checksum=40e1330a62b6ed10b91eb73bc87d4fe6f8307b98 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:22 np0005534696 python3[6706]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:01:22 np0005534696 python3[6732]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:01:22 np0005534696 python3[6812]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:01:23 np0005534696 python3[6885]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061282.74603-455-166050921806934/source _original_basename=tmps91xdfof follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:23 np0005534696 python3[6936]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-bfa5-76fa-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:01:24 np0005534696 python3[6964]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e08-49e2-bfa5-76fa-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 04:01:25 np0005534696 python3[6992]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:33 np0005534696 irqbalance[742]: Cannot change IRQ 43 affinity: Operation not permitted
Nov 25 04:01:33 np0005534696 irqbalance[742]: IRQ 43 affinity is now unmanaged
Nov 25 04:01:34 np0005534696 chronyd[746]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Nov 25 04:01:41 np0005534696 python3[7018]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:01:49 np0005534696 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 04:02:41 np0005534696 systemd-logind[744]: Session 1 logged out. Waiting for processes to exit.
Nov 25 04:02:45 np0005534696 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 25 04:02:45 np0005534696 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 04:02:45 np0005534696 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 04:02:45 np0005534696 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Nov 25 04:02:45 np0005534696 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Nov 25 04:02:45 np0005534696 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Nov 25 04:02:45 np0005534696 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Nov 25 04:02:45 np0005534696 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.1817] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 04:02:45 np0005534696 systemd-udevd[7021]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2074] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2097] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2101] device (eth1): carrier: link connected
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2103] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2109] policy: auto-activating connection 'Wired connection 1' (d6eee9d1-a2ad-35b5-85ea-83b8e5763015)
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2112] device (eth1): Activation: starting connection 'Wired connection 1' (d6eee9d1-a2ad-35b5-85ea-83b8e5763015)
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2113] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2117] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2121] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:02:45 np0005534696 NetworkManager[811]: <info>  [1764061365.2125] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:02:45 np0005534696 systemd-logind[744]: New session 3 of user zuul.
Nov 25 04:02:45 np0005534696 systemd[1]: Started Session 3 of User zuul.
Nov 25 04:02:45 np0005534696 python3[7052]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-c32a-ccd7-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:02:55 np0005534696 python3[7132]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:02:55 np0005534696 python3[7205]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764061375.3531835-212-180317121839396/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=e2ae886696d50e84bba4a88fd0692d3c904ae961 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:02:56 np0005534696 python3[7255]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:02:56 np0005534696 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 04:02:56 np0005534696 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 04:02:56 np0005534696 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 04:02:56 np0005534696 systemd[1]: Stopping Network Manager...
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1755] caught SIGTERM, shutting down normally.
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1763] dhcp4 (eth0): canceled DHCP transaction
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1763] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1763] dhcp4 (eth0): state changed no lease
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1765] dhcp6 (eth0): canceled DHCP transaction
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1765] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1765] dhcp6 (eth0): state changed no lease
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1767] manager: NetworkManager state is now CONNECTING
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1837] dhcp4 (eth1): canceled DHCP transaction
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1838] dhcp4 (eth1): state changed no lease
Nov 25 04:02:56 np0005534696 NetworkManager[811]: <info>  [1764061376.1858] exiting (success)
Nov 25 04:02:56 np0005534696 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 04:02:56 np0005534696 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 04:02:56 np0005534696 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 04:02:56 np0005534696 systemd[1]: Stopped Network Manager.
Nov 25 04:02:56 np0005534696 systemd[1]: Starting Network Manager...
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2279] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:64a5e7c0-2269-451f-b570-b292d4bfbc96)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2280] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2322] manager[0x55d6d9645090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 04:02:56 np0005534696 systemd[1]: Starting Hostname Service...
Nov 25 04:02:56 np0005534696 systemd[1]: Started Hostname Service.
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2827] hostname: hostname: using hostnamed
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2828] hostname: static hostname changed from (none) to "np0005534696"
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2830] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2833] manager[0x55d6d9645090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2833] manager[0x55d6d9645090]: rfkill: WWAN hardware radio set enabled
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2852] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2853] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2853] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2853] manager: Networking is enabled by state file
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2855] settings: Loaded settings plugin: keyfile (internal)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2858] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2878] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2885] dhcp: init: Using DHCP client 'internal'
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2887] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2891] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2895] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2901] device (lo): Activation: starting connection 'lo' (82e74c3e-4080-479b-a643-546a3994c15a)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2906] device (eth0): carrier: link connected
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2909] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2913] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2913] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2918] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2923] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2927] device (eth1): carrier: link connected
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2931] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2934] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d6eee9d1-a2ad-35b5-85ea-83b8e5763015) (indicated)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2934] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2939] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2943] device (eth1): Activation: starting connection 'Wired connection 1' (d6eee9d1-a2ad-35b5-85ea-83b8e5763015)
Nov 25 04:02:56 np0005534696 systemd[1]: Started Network Manager.
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2948] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2951] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2954] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2956] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2958] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2960] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2962] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2964] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2966] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2971] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2974] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2977] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2978] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2984] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2989] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2994] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.2997] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.3000] device (lo): Activation: successful, device activated.
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.3004] dhcp4 (eth0): state changed new lease, address=192.168.26.176
Nov 25 04:02:56 np0005534696 NetworkManager[7265]: <info>  [1764061376.3009] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 04:02:56 np0005534696 systemd[1]: Starting Network Manager Wait Online...
Nov 25 04:02:56 np0005534696 python3[7327]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-c32a-ccd7-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:02:57 np0005534696 NetworkManager[7265]: <info>  [1764061377.3981] dhcp6 (eth0): state changed new lease, address=2001:db8::2d4
Nov 25 04:02:57 np0005534696 NetworkManager[7265]: <info>  [1764061377.3991] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 04:02:57 np0005534696 NetworkManager[7265]: <info>  [1764061377.4014] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 04:02:57 np0005534696 NetworkManager[7265]: <info>  [1764061377.4015] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 04:02:57 np0005534696 NetworkManager[7265]: <info>  [1764061377.4018] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 04:02:57 np0005534696 NetworkManager[7265]: <info>  [1764061377.4021] device (eth0): Activation: successful, device activated.
Nov 25 04:02:57 np0005534696 NetworkManager[7265]: <info>  [1764061377.4024] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 04:03:07 np0005534696 systemd[4370]: Starting Mark boot as successful...
Nov 25 04:03:07 np0005534696 systemd[4370]: Finished Mark boot as successful.
Nov 25 04:03:07 np0005534696 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 04:03:26 np0005534696 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5428] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 04:03:41 np0005534696 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 04:03:41 np0005534696 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5639] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5641] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5646] device (eth1): Activation: successful, device activated.
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5651] manager: startup complete
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5652] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <warn>  [1764061421.5656] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5663] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 04:03:41 np0005534696 systemd[1]: Finished Network Manager Wait Online.
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5710] dhcp4 (eth1): canceled DHCP transaction
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5710] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5710] dhcp4 (eth1): state changed no lease
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5721] policy: auto-activating connection 'ci-private-network' (127bddd6-bc23-50ff-8d29-a76aaf0a900f)
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5724] device (eth1): Activation: starting connection 'ci-private-network' (127bddd6-bc23-50ff-8d29-a76aaf0a900f)
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5725] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5728] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5735] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5741] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5767] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5768] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:03:41 np0005534696 NetworkManager[7265]: <info>  [1764061421.5775] device (eth1): Activation: successful, device activated.
Nov 25 04:03:45 np0005534696 python3[7453]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:03:45 np0005534696 python3[7526]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061425.4152899-379-165731976963066/source _original_basename=tmpj4tq53fd follow=False checksum=5493b85a684a9b4806ca892e69594374a0bfd8b8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:03:47 np0005534696 systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 04:03:47 np0005534696 systemd[1]: session-3.scope: Consumed 1.635s CPU time.
Nov 25 04:03:47 np0005534696 systemd-logind[744]: Session 3 logged out. Waiting for processes to exit.
Nov 25 04:03:47 np0005534696 systemd-logind[744]: Removed session 3.
Nov 25 04:03:51 np0005534696 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 04:06:07 np0005534696 systemd[4370]: Created slice User Background Tasks Slice.
Nov 25 04:06:07 np0005534696 systemd[4370]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 04:06:07 np0005534696 systemd[4370]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 04:08:20 np0005534696 systemd-logind[744]: New session 4 of user zuul.
Nov 25 04:08:21 np0005534696 systemd[1]: Started Session 4 of User zuul.
Nov 25 04:08:21 np0005534696 python3[7583]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e08-49e2-292d-b97b-000000001cda-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:08:21 np0005534696 python3[7612]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:08:21 np0005534696 python3[7638]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:08:21 np0005534696 python3[7664]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:08:22 np0005534696 python3[7690]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:08:22 np0005534696 python3[7716]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:08:23 np0005534696 python3[7794]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:08:23 np0005534696 python3[7867]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061702.873301-513-224537730660499/source _original_basename=tmpmim1boqx follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:08:24 np0005534696 python3[7917]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:08:24 np0005534696 systemd[1]: Reloading.
Nov 25 04:08:24 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:08:25 np0005534696 python3[7973]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 04:08:25 np0005534696 python3[7999]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:08:25 np0005534696 python3[8027]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:08:26 np0005534696 python3[8055]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:08:26 np0005534696 python3[8083]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:08:26 np0005534696 python3[8110]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e08-49e2-292d-b97b-000000001ce1-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:08:27 np0005534696 python3[8140]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 04:08:29 np0005534696 systemd-logind[744]: Session 4 logged out. Waiting for processes to exit.
Nov 25 04:08:29 np0005534696 systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 04:08:29 np0005534696 systemd[1]: session-4.scope: Consumed 3.121s CPU time.
Nov 25 04:08:29 np0005534696 systemd-logind[744]: Removed session 4.
Nov 25 04:08:31 np0005534696 systemd-logind[744]: New session 5 of user zuul.
Nov 25 04:08:31 np0005534696 systemd[1]: Started Session 5 of User zuul.
Nov 25 04:08:31 np0005534696 python3[8175]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 04:08:57 np0005534696 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 04:08:57 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:08:57 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:08:57 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:08:57 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:08:57 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:08:57 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:08:57 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:09:03 np0005534696 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 04:09:03 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:09:03 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:09:03 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:09:03 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:09:03 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:09:03 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:09:03 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:09:10 np0005534696 kernel: SELinux:  Converting 385 SID table entries...
Nov 25 04:09:10 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:09:10 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:09:10 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:09:10 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:09:10 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:09:10 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:09:10 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:09:11 np0005534696 setsebool[8244]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 04:09:11 np0005534696 setsebool[8244]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 25 04:09:19 np0005534696 kernel: SELinux:  Converting 388 SID table entries...
Nov 25 04:09:19 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:09:19 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:09:19 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:09:19 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:09:19 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:09:19 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:09:19 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:09:32 np0005534696 dbus-broker-launch[731]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 04:09:32 np0005534696 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 04:09:32 np0005534696 systemd[1]: Starting man-db-cache-update.service...
Nov 25 04:09:32 np0005534696 systemd[1]: Reloading.
Nov 25 04:09:32 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:09:32 np0005534696 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 04:09:35 np0005534696 python3[13080]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e08-49e2-29e8-c457-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:09:36 np0005534696 kernel: evm: overlay not supported
Nov 25 04:09:36 np0005534696 systemd[4370]: Starting D-Bus User Message Bus...
Nov 25 04:09:36 np0005534696 dbus-broker-launch[14034]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 04:09:36 np0005534696 dbus-broker-launch[14034]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 04:09:36 np0005534696 systemd[4370]: Started D-Bus User Message Bus.
Nov 25 04:09:36 np0005534696 dbus-broker-lau[14034]: Ready
Nov 25 04:09:36 np0005534696 systemd[4370]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 04:09:36 np0005534696 systemd[4370]: Created slice Slice /user.
Nov 25 04:09:36 np0005534696 systemd[4370]: podman-14017.scope: unit configures an IP firewall, but not running as root.
Nov 25 04:09:36 np0005534696 systemd[4370]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 04:09:36 np0005534696 systemd[4370]: Started podman-14017.scope.
Nov 25 04:09:36 np0005534696 systemd[4370]: Started podman-pause-307427b4.scope.
Nov 25 04:09:37 np0005534696 systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 04:09:37 np0005534696 systemd[1]: session-5.scope: Consumed 44.795s CPU time.
Nov 25 04:09:37 np0005534696 systemd-logind[744]: Session 5 logged out. Waiting for processes to exit.
Nov 25 04:09:37 np0005534696 systemd-logind[744]: Removed session 5.
Nov 25 04:09:55 np0005534696 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 04:09:55 np0005534696 systemd[1]: Finished man-db-cache-update.service.
Nov 25 04:09:55 np0005534696 systemd[1]: man-db-cache-update.service: Consumed 27.697s CPU time.
Nov 25 04:09:55 np0005534696 systemd[1]: run-rfdfadad0e2ac43ae9d49bf5acd70b90d.service: Deactivated successfully.
Nov 25 04:10:02 np0005534696 systemd-logind[744]: New session 6 of user zuul.
Nov 25 04:10:02 np0005534696 systemd[1]: Started Session 6 of User zuul.
Nov 25 04:10:02 np0005534696 python3[29657]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:10:02 np0005534696 python3[29683]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:10:03 np0005534696 python3[29709]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005534696 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 04:10:03 np0005534696 python3[29743]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB4HO9/Pb272mGI+U/szIpw/9oHLx4rtGIraRz1dlV41+TJMU38ktCW6c/rIbXW5YjEe8m7up3kNe2OypGHdxy8= zuul@np0005534693#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 04:10:03 np0005534696 python3[29821]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:10:04 np0005534696 python3[29894]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764061803.7480295-155-150062425833753/source _original_basename=tmpcis2a6qp follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:10:05 np0005534696 python3[29944]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Nov 25 04:10:05 np0005534696 systemd[1]: Starting Hostname Service...
Nov 25 04:10:05 np0005534696 systemd[1]: Started Hostname Service.
Nov 25 04:10:05 np0005534696 systemd-hostnamed[29948]: Changed pretty hostname to 'compute-2'
Nov 25 04:10:05 np0005534696 systemd-hostnamed[29948]: Hostname set to <compute-2> (static)
Nov 25 04:10:05 np0005534696 NetworkManager[7265]: <info>  [1764061805.1349] hostname: static hostname changed from "np0005534696" to "compute-2"
Nov 25 04:10:05 np0005534696 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 04:10:05 np0005534696 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 04:10:05 np0005534696 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 04:10:05 np0005534696 systemd-logind[744]: Session 6 logged out. Waiting for processes to exit.
Nov 25 04:10:05 np0005534696 systemd[1]: session-6.scope: Consumed 1.670s CPU time.
Nov 25 04:10:05 np0005534696 systemd-logind[744]: Removed session 6.
Nov 25 04:10:15 np0005534696 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 04:10:35 np0005534696 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 04:13:24 np0005534696 systemd-logind[744]: New session 7 of user zuul.
Nov 25 04:13:24 np0005534696 systemd[1]: Started Session 7 of User zuul.
Nov 25 04:13:24 np0005534696 python3[30044]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:13:26 np0005534696 python3[30156]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:13:26 np0005534696 python3[30229]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9645495-34384-97551799367139/source mode=0755 _original_basename=delorean.repo follow=False checksum=cdee622b4b81aba8f448eb3a0d6bf38022474867 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:13:26 np0005534696 python3[30255]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:13:26 np0005534696 python3[30328]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9645495-34384-97551799367139/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:13:27 np0005534696 python3[30354]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:13:27 np0005534696 python3[30427]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9645495-34384-97551799367139/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:13:27 np0005534696 python3[30453]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:13:27 np0005534696 python3[30526]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9645495-34384-97551799367139/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:13:27 np0005534696 python3[30552]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:13:28 np0005534696 python3[30625]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9645495-34384-97551799367139/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:13:28 np0005534696 python3[30651]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:13:28 np0005534696 python3[30724]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9645495-34384-97551799367139/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:13:28 np0005534696 python3[30750]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:13:28 np0005534696 python3[30823]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764062005.9645495-34384-97551799367139/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:13:38 np0005534696 python3[30871]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:15:57 np0005534696 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 04:15:57 np0005534696 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 04:15:57 np0005534696 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 04:15:57 np0005534696 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 04:18:37 np0005534696 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 04:18:37 np0005534696 systemd[1]: session-7.scope: Consumed 3.358s CPU time.
Nov 25 04:18:37 np0005534696 systemd-logind[744]: Session 7 logged out. Waiting for processes to exit.
Nov 25 04:18:37 np0005534696 systemd-logind[744]: Removed session 7.
Nov 25 04:23:02 np0005534696 systemd-logind[744]: New session 8 of user zuul.
Nov 25 04:23:02 np0005534696 systemd[1]: Started Session 8 of User zuul.
Nov 25 04:23:03 np0005534696 python3.9[31030]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:23:04 np0005534696 python3.9[31211]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:23:12 np0005534696 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 04:23:12 np0005534696 systemd[1]: session-8.scope: Consumed 6.143s CPU time.
Nov 25 04:23:12 np0005534696 systemd-logind[744]: Session 8 logged out. Waiting for processes to exit.
Nov 25 04:23:12 np0005534696 systemd-logind[744]: Removed session 8.
Nov 25 04:23:28 np0005534696 systemd-logind[744]: New session 9 of user zuul.
Nov 25 04:23:28 np0005534696 systemd[1]: Started Session 9 of User zuul.
Nov 25 04:23:28 np0005534696 python3.9[31423]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 04:23:29 np0005534696 python3.9[31597]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:23:30 np0005534696 python3.9[31749]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:23:31 np0005534696 python3.9[31902]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:23:31 np0005534696 python3.9[32054]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:23:32 np0005534696 python3.9[32206]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:23:33 np0005534696 python3.9[32329]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062612.1814992-180-80022960868666/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:23:33 np0005534696 python3.9[32481]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:23:34 np0005534696 python3.9[32637]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:23:34 np0005534696 python3.9[32789]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:23:35 np0005534696 python3.9[32939]: ansible-ansible.builtin.service_facts Invoked
Nov 25 04:23:37 np0005534696 python3.9[33192]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:23:38 np0005534696 python3.9[33342]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:23:39 np0005534696 python3.9[33496]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:23:40 np0005534696 python3.9[33654]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:23:40 np0005534696 python3.9[33738]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:24:47 np0005534696 systemd[1]: Reloading.
Nov 25 04:24:47 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:24:47 np0005534696 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 04:24:48 np0005534696 systemd[1]: Reloading.
Nov 25 04:24:48 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:24:48 np0005534696 systemd[1]: Starting dnf makecache...
Nov 25 04:24:48 np0005534696 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 04:24:48 np0005534696 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 04:24:48 np0005534696 systemd[1]: Reloading.
Nov 25 04:24:48 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:24:48 np0005534696 dnf[33988]: Failed determining last makecache time.
Nov 25 04:24:48 np0005534696 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 04:24:48 np0005534696 dnf[33988]: delorean-openstack-barbican-42b4c41831408a8e323  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:48 np0005534696 dbus-broker-launch[725]: Noticed file-system modification, trigger reload.
Nov 25 04:24:48 np0005534696 dbus-broker-launch[725]: Noticed file-system modification, trigger reload.
Nov 25 04:24:48 np0005534696 dbus-broker-launch[725]: Noticed file-system modification, trigger reload.
Nov 25 04:24:48 np0005534696 dnf[33988]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:48 np0005534696 dnf[33988]: delorean-openstack-cinder-1c00d6490d88e436f26ef  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:49 np0005534696 dnf[33988]: delorean-python-stevedore-c4acc5639fd2329372142  21 kB/s | 3.0 kB     00:00
Nov 25 04:24:49 np0005534696 dnf[33988]: delorean-python-observabilityclient-2f31846d73c  21 kB/s | 3.0 kB     00:00
Nov 25 04:24:49 np0005534696 dnf[33988]: delorean-os-net-config-bbae2ed8a159b0435a473f38  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:49 np0005534696 dnf[33988]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:49 np0005534696 dnf[33988]: delorean-python-designate-tests-tempest-347fdbc  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:49 np0005534696 dnf[33988]: delorean-openstack-glance-1fd12c29b339f30fe823e  20 kB/s | 3.0 kB     00:00
Nov 25 04:24:49 np0005534696 dnf[33988]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  21 kB/s | 3.0 kB     00:00
Nov 25 04:24:50 np0005534696 dnf[33988]: delorean-openstack-manila-3c01b7181572c95dac462  23 kB/s | 3.0 kB     00:00
Nov 25 04:24:50 np0005534696 dnf[33988]: delorean-python-whitebox-neutron-tests-tempest-  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:50 np0005534696 dnf[33988]: delorean-openstack-octavia-ba397f07a7331190208c  23 kB/s | 3.0 kB     00:00
Nov 25 04:24:50 np0005534696 dnf[33988]: delorean-openstack-watcher-c014f81a8647287f6dcc  21 kB/s | 3.0 kB     00:00
Nov 25 04:24:50 np0005534696 dnf[33988]: delorean-python-tcib-1124124ec06aadbac34f0d340b  21 kB/s | 3.0 kB     00:00
Nov 25 04:24:51 np0005534696 dnf[33988]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:51 np0005534696 dnf[33988]: delorean-openstack-swift-dc98a8463506ac520c469a  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:51 np0005534696 dnf[33988]: delorean-python-tempestconf-8515371b7cceebd4282  22 kB/s | 3.0 kB     00:00
Nov 25 04:24:51 np0005534696 dnf[33988]: delorean-openstack-heat-ui-013accbfd179753bc3f0  23 kB/s | 3.0 kB     00:00
Nov 25 04:24:52 np0005534696 dnf[33988]: CentOS Stream 9 - BaseOS                        3.8 kB/s | 5.4 kB     00:01
Nov 25 04:24:53 np0005534696 dnf[33988]: CentOS Stream 9 - AppStream                      16 kB/s | 6.1 kB     00:00
Nov 25 04:24:53 np0005534696 dnf[33988]: CentOS Stream 9 - CRB                            13 kB/s | 5.3 kB     00:00
Nov 25 04:24:54 np0005534696 dnf[33988]: CentOS Stream 9 - Extras packages                20 kB/s | 8.3 kB     00:00
Nov 25 04:24:54 np0005534696 dnf[33988]: dlrn-antelope-testing                            23 kB/s | 3.0 kB     00:00
Nov 25 04:24:54 np0005534696 dnf[33988]: dlrn-antelope-build-deps                         22 kB/s | 3.0 kB     00:00
Nov 25 04:24:57 np0005534696 dnf[33988]: centos9-rabbitmq                                1.1 kB/s | 3.0 kB     00:02
Nov 25 04:24:58 np0005534696 dnf[33988]: centos9-storage                                 2.1 kB/s | 3.0 kB     00:01
Nov 25 04:24:59 np0005534696 dnf[33988]: centos9-opstools                                7.1 kB/s | 3.0 kB     00:00
Nov 25 04:24:59 np0005534696 dnf[33988]: NFV SIG OpenvSwitch                             7.1 kB/s | 3.0 kB     00:00
Nov 25 04:25:00 np0005534696 dnf[33988]: repo-setup-centos-appstream                      10 kB/s | 4.4 kB     00:00
Nov 25 04:25:00 np0005534696 dnf[33988]: repo-setup-centos-baseos                        9.2 kB/s | 3.9 kB     00:00
Nov 25 04:25:00 np0005534696 dnf[33988]: repo-setup-centos-highavailability              9.1 kB/s | 3.9 kB     00:00
Nov 25 04:25:02 np0005534696 dnf[33988]: repo-setup-centos-powertools                    2.2 kB/s | 4.3 kB     00:01
Nov 25 04:25:03 np0005534696 dnf[33988]: Extra Packages for Enterprise Linux 9 - x86_64   43 kB/s |  31 kB     00:00
Nov 25 04:25:04 np0005534696 dnf[33988]: Metadata cache created.
Nov 25 04:25:04 np0005534696 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 04:25:04 np0005534696 systemd[1]: Finished dnf makecache.
Nov 25 04:25:04 np0005534696 systemd[1]: dnf-makecache.service: Consumed 1.303s CPU time.
Nov 25 04:25:32 np0005534696 kernel: SELinux:  Converting 2715 SID table entries...
Nov 25 04:25:32 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:25:32 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:25:32 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:25:32 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:25:32 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:25:32 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:25:32 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:25:32 np0005534696 dbus-broker-launch[731]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 04:25:32 np0005534696 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 04:25:32 np0005534696 systemd[1]: Starting man-db-cache-update.service...
Nov 25 04:25:32 np0005534696 systemd[1]: Reloading.
Nov 25 04:25:32 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:25:32 np0005534696 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 04:25:33 np0005534696 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 04:25:33 np0005534696 systemd[1]: Finished man-db-cache-update.service.
Nov 25 04:25:33 np0005534696 systemd[1]: run-r0b11d99c6d134b69a6a43b4466bbacd7.service: Deactivated successfully.
Nov 25 04:26:02 np0005534696 python3.9[35272]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:26:03 np0005534696 python3.9[35553]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 04:26:04 np0005534696 python3.9[35705]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 04:26:05 np0005534696 python3.9[35858]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:26:06 np0005534696 python3.9[36010]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 04:26:07 np0005534696 python3.9[36162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:26:08 np0005534696 python3.9[36314]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:26:08 np0005534696 python3.9[36437]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062768.0874803-669-129531921162464/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:26:12 np0005534696 python3.9[36589]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:26:13 np0005534696 python3.9[36741]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:26:13 np0005534696 python3.9[36894]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:26:14 np0005534696 python3.9[37046]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 04:26:15 np0005534696 python3.9[37199]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 04:26:15 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:26:16 np0005534696 python3.9[37358]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 04:26:16 np0005534696 python3.9[37518]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 04:26:17 np0005534696 python3.9[37671]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 04:26:17 np0005534696 python3.9[37829]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 04:26:18 np0005534696 python3.9[37981]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:26:20 np0005534696 python3.9[38134]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:26:20 np0005534696 python3.9[38286]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:26:21 np0005534696 python3.9[38409]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062780.5003965-1026-88211337417726/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:26:22 np0005534696 python3.9[38561]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:26:22 np0005534696 systemd[1]: Starting Load Kernel Modules...
Nov 25 04:26:22 np0005534696 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 04:26:22 np0005534696 kernel: Bridge firewalling registered
Nov 25 04:26:22 np0005534696 systemd-modules-load[38565]: Inserted module 'br_netfilter'
Nov 25 04:26:22 np0005534696 systemd[1]: Finished Load Kernel Modules.
Nov 25 04:26:22 np0005534696 python3.9[38721]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:26:22 np0005534696 python3.9[38844]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062782.2711806-1095-181596722699887/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:26:23 np0005534696 python3.9[38996]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:26:29 np0005534696 dbus-broker-launch[725]: Noticed file-system modification, trigger reload.
Nov 25 04:26:29 np0005534696 dbus-broker-launch[725]: Noticed file-system modification, trigger reload.
Nov 25 04:26:30 np0005534696 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 04:26:30 np0005534696 systemd[1]: Starting man-db-cache-update.service...
Nov 25 04:26:30 np0005534696 systemd[1]: Reloading.
Nov 25 04:26:30 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:26:30 np0005534696 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 04:26:31 np0005534696 python3.9[40719]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:26:31 np0005534696 python3.9[41740]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 04:26:32 np0005534696 python3.9[42609]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:26:32 np0005534696 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 04:26:32 np0005534696 systemd[1]: Finished man-db-cache-update.service.
Nov 25 04:26:32 np0005534696 systemd[1]: man-db-cache-update.service: Consumed 3.178s CPU time.
Nov 25 04:26:32 np0005534696 systemd[1]: run-r6828c7bc5413470aa7344aca19fe740b.service: Deactivated successfully.
Nov 25 04:26:33 np0005534696 python3.9[43166]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:26:33 np0005534696 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 04:26:33 np0005534696 systemd[1]: Starting Authorization Manager...
Nov 25 04:26:33 np0005534696 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 04:26:33 np0005534696 polkitd[43383]: Started polkitd version 0.117
Nov 25 04:26:33 np0005534696 systemd[1]: Started Authorization Manager.
Nov 25 04:26:34 np0005534696 python3.9[43549]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:26:34 np0005534696 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 04:26:34 np0005534696 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 04:26:34 np0005534696 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 04:26:34 np0005534696 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 04:26:34 np0005534696 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 04:26:35 np0005534696 python3.9[43711]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 04:26:38 np0005534696 python3.9[43863]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:26:38 np0005534696 systemd[1]: Reloading.
Nov 25 04:26:38 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:26:39 np0005534696 python3.9[44052]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:26:39 np0005534696 systemd[1]: Reloading.
Nov 25 04:26:39 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:26:39 np0005534696 python3.9[44242]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:26:40 np0005534696 python3.9[44395]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:26:40 np0005534696 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 25 04:26:40 np0005534696 python3.9[44548]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:26:42 np0005534696 python3.9[44710]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:26:43 np0005534696 python3.9[44863]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:26:43 np0005534696 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 04:26:43 np0005534696 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 04:26:43 np0005534696 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 04:26:43 np0005534696 systemd[1]: Starting Apply Kernel Variables...
Nov 25 04:26:43 np0005534696 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 04:26:43 np0005534696 systemd[1]: Finished Apply Kernel Variables.
Nov 25 04:26:43 np0005534696 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 04:26:43 np0005534696 systemd[1]: session-9.scope: Consumed 1min 36.891s CPU time.
Nov 25 04:26:43 np0005534696 systemd-logind[744]: Session 9 logged out. Waiting for processes to exit.
Nov 25 04:26:43 np0005534696 systemd-logind[744]: Removed session 9.
Nov 25 04:26:49 np0005534696 systemd-logind[744]: New session 10 of user zuul.
Nov 25 04:26:49 np0005534696 systemd[1]: Started Session 10 of User zuul.
Nov 25 04:26:50 np0005534696 python3.9[45046]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:26:51 np0005534696 python3.9[45202]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 04:26:51 np0005534696 python3.9[45355]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 04:26:52 np0005534696 python3.9[45513]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 04:26:53 np0005534696 python3.9[45673]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:26:54 np0005534696 python3.9[45757]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 04:27:32 np0005534696 python3.9[45922]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:27:41 np0005534696 kernel: SELinux:  Converting 2728 SID table entries...
Nov 25 04:27:41 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:27:41 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:27:41 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:27:41 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:27:41 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:27:41 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:27:41 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:27:41 np0005534696 dbus-broker-launch[731]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 04:27:41 np0005534696 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 04:27:41 np0005534696 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 04:27:42 np0005534696 systemd[1]: Starting man-db-cache-update.service...
Nov 25 04:27:42 np0005534696 systemd[1]: Reloading.
Nov 25 04:27:42 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:27:42 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:27:42 np0005534696 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 04:27:42 np0005534696 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 04:27:42 np0005534696 systemd[1]: Finished man-db-cache-update.service.
Nov 25 04:27:42 np0005534696 systemd[1]: run-r10aeab8749b346fba57cd212e52202c8.service: Deactivated successfully.
Nov 25 04:27:43 np0005534696 python3.9[47020]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:27:43 np0005534696 systemd[1]: Reloading.
Nov 25 04:27:43 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:27:43 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:27:43 np0005534696 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 04:27:43 np0005534696 chown[47062]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 04:27:43 np0005534696 ovs-ctl[47067]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 04:27:43 np0005534696 ovs-ctl[47067]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 04:27:43 np0005534696 ovs-ctl[47067]: Starting ovsdb-server [  OK  ]
Nov 25 04:27:43 np0005534696 ovs-vsctl[47116]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 04:27:43 np0005534696 ovs-vsctl[47136]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"f116e443-3007-4d69-b0d6-1b58bbc026ea\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 04:27:43 np0005534696 ovs-ctl[47067]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 04:27:43 np0005534696 ovs-ctl[47067]: Enabling remote OVSDB managers [  OK  ]
Nov 25 04:27:43 np0005534696 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 04:27:43 np0005534696 ovs-vsctl[47142]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Nov 25 04:27:43 np0005534696 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 04:27:43 np0005534696 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 04:27:43 np0005534696 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 04:27:43 np0005534696 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 04:27:43 np0005534696 ovs-ctl[47187]: Inserting openvswitch module [  OK  ]
Nov 25 04:27:43 np0005534696 ovs-ctl[47156]: Starting ovs-vswitchd [  OK  ]
Nov 25 04:27:43 np0005534696 ovs-ctl[47156]: Enabling remote OVSDB managers [  OK  ]
Nov 25 04:27:43 np0005534696 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 04:27:43 np0005534696 ovs-vsctl[47205]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Nov 25 04:27:43 np0005534696 systemd[1]: Starting Open vSwitch...
Nov 25 04:27:43 np0005534696 systemd[1]: Finished Open vSwitch.
Nov 25 04:27:44 np0005534696 python3.9[47356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:27:45 np0005534696 python3.9[47508]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 04:27:46 np0005534696 kernel: SELinux:  Converting 2742 SID table entries...
Nov 25 04:27:46 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:27:46 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:27:46 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:27:46 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:27:46 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:27:46 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:27:46 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:27:46 np0005534696 python3.9[47663]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:27:47 np0005534696 dbus-broker-launch[731]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 04:27:47 np0005534696 python3.9[47821]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:27:49 np0005534696 python3.9[47974]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:27:50 np0005534696 python3.9[48261]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 04:27:50 np0005534696 python3.9[48411]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:27:51 np0005534696 python3.9[48565]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:27:54 np0005534696 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 04:27:54 np0005534696 systemd[1]: Starting man-db-cache-update.service...
Nov 25 04:27:54 np0005534696 systemd[1]: Reloading.
Nov 25 04:27:54 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:27:54 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:27:54 np0005534696 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 04:27:54 np0005534696 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 04:27:54 np0005534696 systemd[1]: Finished man-db-cache-update.service.
Nov 25 04:27:54 np0005534696 systemd[1]: run-r4fc7c95447f542f090913c0e2032adf3.service: Deactivated successfully.
Nov 25 04:27:55 np0005534696 python3.9[48882]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:27:55 np0005534696 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 04:27:55 np0005534696 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 04:27:55 np0005534696 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 04:27:55 np0005534696 systemd[1]: Stopping Network Manager...
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2265] caught SIGTERM, shutting down normally.
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2274] dhcp4 (eth0): canceled DHCP transaction
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2274] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2274] dhcp4 (eth0): state changed no lease
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2275] dhcp6 (eth0): canceled DHCP transaction
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2275] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2275] dhcp6 (eth0): state changed no lease
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2277] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 04:27:55 np0005534696 NetworkManager[7265]: <info>  [1764062875.2301] exiting (success)
Nov 25 04:27:55 np0005534696 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 04:27:55 np0005534696 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 04:27:55 np0005534696 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 04:27:55 np0005534696 systemd[1]: Stopped Network Manager.
Nov 25 04:27:55 np0005534696 systemd[1]: Starting Network Manager...
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.2727] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:64a5e7c0-2269-451f-b570-b292d4bfbc96)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.2729] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.2771] manager[0x558a0b6cc010]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 04:27:55 np0005534696 systemd[1]: Starting Hostname Service...
Nov 25 04:27:55 np0005534696 systemd[1]: Started Hostname Service.
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3332] hostname: hostname: using hostnamed
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3332] hostname: static hostname changed from (none) to "compute-2"
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3335] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3338] manager[0x558a0b6cc010]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3338] manager[0x558a0b6cc010]: rfkill: WWAN hardware radio set enabled
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3354] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3361] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3361] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3361] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3362] manager: Networking is enabled by state file
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3363] settings: Loaded settings plugin: keyfile (internal)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3366] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3386] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3392] dhcp: init: Using DHCP client 'internal'
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3394] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3398] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3402] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3408] device (lo): Activation: starting connection 'lo' (82e74c3e-4080-479b-a643-546a3994c15a)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3412] device (eth0): carrier: link connected
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3416] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3419] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3419] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3424] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3428] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3432] device (eth1): carrier: link connected
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3435] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3439] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (127bddd6-bc23-50ff-8d29-a76aaf0a900f) (indicated)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3439] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3443] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3450] device (eth1): Activation: starting connection 'ci-private-network' (127bddd6-bc23-50ff-8d29-a76aaf0a900f)
Nov 25 04:27:55 np0005534696 systemd[1]: Started Network Manager.
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3454] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3463] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3466] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3467] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3469] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3472] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3473] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3476] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3479] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3482] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3484] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3487] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3492] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3495] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3500] dhcp4 (eth0): state changed new lease, address=192.168.26.176
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3509] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3534] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3539] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3541] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3543] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3547] device (lo): Activation: successful, device activated.
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3551] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3553] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 04:27:55 np0005534696 NetworkManager[48892]: <info>  [1764062875.3555] device (eth1): Activation: successful, device activated.
Nov 25 04:27:55 np0005534696 systemd[1]: Starting Network Manager Wait Online...
Nov 25 04:27:55 np0005534696 python3.9[49091]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:27:56 np0005534696 NetworkManager[48892]: <info>  [1764062876.4252] dhcp6 (eth0): state changed new lease, address=2001:db8::2d4
Nov 25 04:27:56 np0005534696 NetworkManager[48892]: <info>  [1764062876.4260] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 04:27:56 np0005534696 NetworkManager[48892]: <info>  [1764062876.4282] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 04:27:56 np0005534696 NetworkManager[48892]: <info>  [1764062876.4283] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 04:27:56 np0005534696 NetworkManager[48892]: <info>  [1764062876.4286] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 04:27:56 np0005534696 NetworkManager[48892]: <info>  [1764062876.4288] device (eth0): Activation: successful, device activated.
Nov 25 04:27:56 np0005534696 NetworkManager[48892]: <info>  [1764062876.4292] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 04:27:56 np0005534696 NetworkManager[48892]: <info>  [1764062876.4294] manager: startup complete
Nov 25 04:27:56 np0005534696 systemd[1]: Finished Network Manager Wait Online.
Nov 25 04:28:02 np0005534696 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 04:28:02 np0005534696 systemd[1]: Starting man-db-cache-update.service...
Nov 25 04:28:02 np0005534696 systemd[1]: Reloading.
Nov 25 04:28:03 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:28:03 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:28:03 np0005534696 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 04:28:03 np0005534696 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 04:28:03 np0005534696 systemd[1]: Finished man-db-cache-update.service.
Nov 25 04:28:03 np0005534696 systemd[1]: run-rbb15982f9c534bc48861129ff6c88969.service: Deactivated successfully.
Nov 25 04:28:04 np0005534696 python3.9[49568]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:28:05 np0005534696 python3.9[49720]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:05 np0005534696 python3.9[49874]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:06 np0005534696 python3.9[50026]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:06 np0005534696 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 04:28:06 np0005534696 python3.9[50180]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:07 np0005534696 python3.9[50332]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:07 np0005534696 python3.9[50484]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:28:08 np0005534696 python3.9[50607]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062887.4878452-649-197938279392427/.source _original_basename=.1a3je0si follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:08 np0005534696 python3.9[50759]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:09 np0005534696 python3.9[50911]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 04:28:10 np0005534696 python3.9[51063]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:11 np0005534696 python3.9[51490]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 04:28:12 np0005534696 ansible-async_wrapper.py[51665]: Invoked with j372811081645 300 /home/zuul/.ansible/tmp/ansible-tmp-1764062892.0687914-847-247946939070689/AnsiballZ_edpm_os_net_config.py _
Nov 25 04:28:12 np0005534696 ansible-async_wrapper.py[51668]: Starting module and watcher
Nov 25 04:28:12 np0005534696 ansible-async_wrapper.py[51668]: Start watching 51669 (300)
Nov 25 04:28:12 np0005534696 ansible-async_wrapper.py[51669]: Start module (51669)
Nov 25 04:28:12 np0005534696 ansible-async_wrapper.py[51665]: Return async_wrapper task started.
Nov 25 04:28:12 np0005534696 python3.9[51670]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 25 04:28:13 np0005534696 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 04:28:13 np0005534696 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 04:28:13 np0005534696 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 04:28:13 np0005534696 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 04:28:13 np0005534696 kernel: cfg80211: failed to load regulatory.db
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3451] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3470] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3935] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3937] audit: op="connection-add" uuid="4cdb2d5e-8f10-427d-9a12-9780c46d67aa" name="br-ex-br" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3960] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3961] audit: op="connection-add" uuid="1b98136c-35ee-4383-a117-a0514e1c4640" name="br-ex-port" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3974] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3975] audit: op="connection-add" uuid="49abe496-b8aa-4543-afcd-64aa582acf92" name="eth1-port" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3989] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.3990] audit: op="connection-add" uuid="e43f9a61-ae9b-4ce7-a3b2-c8c08153f430" name="vlan20-port" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4003] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4004] audit: op="connection-add" uuid="a83c1eac-8e4c-4b38-b6f3-94c8261962a3" name="vlan21-port" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4017] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4018] audit: op="connection-add" uuid="d25a8971-34b8-4ea8-b334-b23274946ca3" name="vlan22-port" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4031] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4033] audit: op="connection-add" uuid="1ca48f93-27f2-491c-880e-9e7bd51f259a" name="vlan23-port" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4053] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.routes,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4069] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4070] audit: op="connection-add" uuid="8480608a-58e8-40de-8403-9752542087b0" name="br-ex-if" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4097] audit: op="connection-update" uuid="127bddd6-bc23-50ff-8d29-a76aaf0a900f" name="ci-private-network" args="ovs-interface.type,ipv6.addresses,ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.method,ipv6.routing-rules,connection.master,connection.controller,connection.timestamp,connection.slave-type,connection.port-type,ipv4.addresses,ipv4.never-default,ipv4.dns,ipv4.method,ipv4.routing-rules,ipv4.routes,ovs-external-ids.data" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4113] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4114] audit: op="connection-add" uuid="c04a40ab-1fab-41be-b7ab-48f57113ffac" name="vlan20-if" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4131] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4132] audit: op="connection-add" uuid="791d8620-6f54-49c8-9085-b3724ea37b70" name="vlan21-if" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4149] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4150] audit: op="connection-add" uuid="8332eb94-ac06-46f3-8bb1-ea3440e173a3" name="vlan22-if" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4167] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4168] audit: op="connection-add" uuid="03af8121-8300-4f88-a3fa-4f8e7699120a" name="vlan23-if" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4181] audit: op="connection-delete" uuid="d6eee9d1-a2ad-35b5-85ea-83b8e5763015" name="Wired connection 1" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4195] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4206] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4211] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (4cdb2d5e-8f10-427d-9a12-9780c46d67aa)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4212] audit: op="connection-activate" uuid="4cdb2d5e-8f10-427d-9a12-9780c46d67aa" name="br-ex-br" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4216] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4225] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4230] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (1b98136c-35ee-4383-a117-a0514e1c4640)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4232] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4240] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4246] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (49abe496-b8aa-4543-afcd-64aa582acf92)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4248] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4256] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4261] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (e43f9a61-ae9b-4ce7-a3b2-c8c08153f430)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4262] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4271] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4276] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (a83c1eac-8e4c-4b38-b6f3-94c8261962a3)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4279] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4287] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4292] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (d25a8971-34b8-4ea8-b334-b23274946ca3)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4294] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4302] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4307] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (1ca48f93-27f2-491c-880e-9e7bd51f259a)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4308] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4313] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4314] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4320] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4328] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4333] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (8480608a-58e8-40de-8403-9752542087b0)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4333] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4338] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4341] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4342] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4343] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4355] device (eth1): disconnecting for new activation request.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4356] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4359] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4362] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4363] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4367] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4373] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4378] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c04a40ab-1fab-41be-b7ab-48f57113ffac)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4379] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4384] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4385] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4387] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4389] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4396] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4402] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (791d8620-6f54-49c8-9085-b3724ea37b70)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4402] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4407] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4409] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4410] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4414] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4420] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4426] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (8332eb94-ac06-46f3-8bb1-ea3440e173a3)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4428] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4432] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4434] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4435] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4439] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:28:14 np0005534696 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4443] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4448] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (03af8121-8300-4f88-a3fa-4f8e7699120a)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4449] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4453] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4457] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4459] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4461] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4474] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.routes,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4477] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4483] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4486] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4493] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4497] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4501] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4506] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 kernel: ovs-system: entered promiscuous mode
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4517] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4529] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4534] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4538] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4542] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4547] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 systemd-udevd[51677]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4552] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4557] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4559] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4563] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4568] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4572] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4574] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 kernel: Timeout policy base is empty
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4579] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4585] dhcp4 (eth0): canceled DHCP transaction
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4585] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4586] dhcp4 (eth0): state changed no lease
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4586] dhcp6 (eth0): canceled DHCP transaction
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4587] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4587] dhcp6 (eth0): state changed no lease
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4592] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4604] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4611] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51671 uid=0 result="fail" reason="Device is not activated"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4618] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4631] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4635] dhcp4 (eth0): state changed new lease, address=192.168.26.176
Nov 25 04:28:14 np0005534696 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4649] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4669] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4697] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4806] device (eth1): Activation: starting connection 'ci-private-network' (127bddd6-bc23-50ff-8d29-a76aaf0a900f)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4810] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4822] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4834] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4849] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 kernel: br-ex: entered promiscuous mode
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4854] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4860] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4861] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4862] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4863] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4865] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4871] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4873] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4875] device (eth1): released from controller device eth1
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4892] device (eth1): disconnecting for new activation request.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4892] audit: op="connection-activate" uuid="127bddd6-bc23-50ff-8d29-a76aaf0a900f" name="ci-private-network" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4898] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4902] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4925] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4928] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4932] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4937] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4940] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4945] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4951] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4964] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4969] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4974] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 kernel: vlan22: entered promiscuous mode
Nov 25 04:28:14 np0005534696 systemd-udevd[51675]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4994] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51671 uid=0 result="success"
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.4994] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 04:28:14 np0005534696 kernel: vlan21: entered promiscuous mode
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5082] device (eth1): Activation: starting connection 'ci-private-network' (127bddd6-bc23-50ff-8d29-a76aaf0a900f)
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5092] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5094] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5103] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5112] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5114] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 kernel: vlan23: entered promiscuous mode
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5144] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5148] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5168] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5189] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5211] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 kernel: vlan20: entered promiscuous mode
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5232] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5251] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5260] device (eth1): Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5303] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5305] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5310] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5319] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5334] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5344] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5351] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5364] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5390] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5401] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5407] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5412] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5419] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5427] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5463] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5464] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5469] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5476] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5477] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 04:28:14 np0005534696 NetworkManager[48892]: <info>  [1764062894.5482] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 04:28:15 np0005534696 NetworkManager[48892]: <info>  [1764062895.6724] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51671 uid=0 result="success"
Nov 25 04:28:15 np0005534696 NetworkManager[48892]: <info>  [1764062895.7973] checkpoint[0x558a0b6a4950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 04:28:15 np0005534696 NetworkManager[48892]: <info>  [1764062895.7975] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51671 uid=0 result="success"
Nov 25 04:28:15 np0005534696 NetworkManager[48892]: <info>  [1764062895.9327] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51671 uid=0 result="success"
Nov 25 04:28:15 np0005534696 NetworkManager[48892]: <info>  [1764062895.9341] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.1271] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.2579] checkpoint[0x558a0b6a4a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.2585] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 python3.9[52029]: ansible-ansible.legacy.async_status Invoked with jid=j372811081645.51665 mode=status _async_dir=/root/.ansible_async
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.4985] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.4996] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.6710] audit: op="networking-control" arg="global-dns-configuration" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.6725] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.6730] audit: op="networking-control" arg="global-dns-configuration" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.6751] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.7764] checkpoint[0x558a0b6a4af0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Nov 25 04:28:16 np0005534696 NetworkManager[48892]: <info>  [1764062896.7767] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=51671 uid=0 result="success"
Nov 25 04:28:16 np0005534696 ansible-async_wrapper.py[51669]: Module complete (51669)
Nov 25 04:28:17 np0005534696 ansible-async_wrapper.py[51668]: Done in kid B.
Nov 25 04:28:19 np0005534696 python3.9[52135]: ansible-ansible.legacy.async_status Invoked with jid=j372811081645.51665 mode=status _async_dir=/root/.ansible_async
Nov 25 04:28:20 np0005534696 python3.9[52235]: ansible-ansible.legacy.async_status Invoked with jid=j372811081645.51665 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 04:28:20 np0005534696 python3.9[52387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:28:21 np0005534696 python3.9[52510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062900.3914196-928-169853807015631/.source.returncode _original_basename=.aalra9ef follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:21 np0005534696 python3.9[52662]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:28:22 np0005534696 python3.9[52785]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062901.3503292-977-39555078819943/.source.cfg _original_basename=.clclaj_f follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:22 np0005534696 python3.9[52937]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:28:22 np0005534696 systemd[1]: Reloading Network Manager...
Nov 25 04:28:22 np0005534696 NetworkManager[48892]: <info>  [1764062902.7260] audit: op="reload" arg="0" pid=52941 uid=0 result="success"
Nov 25 04:28:22 np0005534696 NetworkManager[48892]: <info>  [1764062902.7266] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 04:28:22 np0005534696 NetworkManager[48892]: <info>  [1764062902.7267] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 04:28:22 np0005534696 systemd[1]: Reloaded Network Manager.
Nov 25 04:28:23 np0005534696 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 04:28:23 np0005534696 systemd[1]: session-10.scope: Consumed 34.883s CPU time.
Nov 25 04:28:23 np0005534696 systemd-logind[744]: Session 10 logged out. Waiting for processes to exit.
Nov 25 04:28:23 np0005534696 systemd-logind[744]: Removed session 10.
Nov 25 04:28:25 np0005534696 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 04:28:28 np0005534696 systemd-logind[744]: New session 11 of user zuul.
Nov 25 04:28:28 np0005534696 systemd[1]: Started Session 11 of User zuul.
Nov 25 04:28:29 np0005534696 python3.9[53127]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:28:29 np0005534696 python3.9[53282]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:28:30 np0005534696 python3.9[53475]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:28:31 np0005534696 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 04:28:31 np0005534696 systemd[1]: session-11.scope: Consumed 1.632s CPU time.
Nov 25 04:28:31 np0005534696 systemd-logind[744]: Session 11 logged out. Waiting for processes to exit.
Nov 25 04:28:31 np0005534696 systemd-logind[744]: Removed session 11.
Nov 25 04:28:32 np0005534696 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 04:28:36 np0005534696 systemd-logind[744]: New session 12 of user zuul.
Nov 25 04:28:36 np0005534696 systemd[1]: Started Session 12 of User zuul.
Nov 25 04:28:37 np0005534696 python3.9[53658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:28:37 np0005534696 python3.9[53812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:28:38 np0005534696 python3.9[53968]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:28:39 np0005534696 python3.9[54053]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:28:40 np0005534696 python3.9[54206]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:28:41 np0005534696 python3.9[54401]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:42 np0005534696 python3.9[54554]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:28:42 np0005534696 systemd[1]: var-lib-containers-storage-overlay-compat1707631999-merged.mount: Deactivated successfully.
Nov 25 04:28:42 np0005534696 podman[54555]: 2025-11-25 09:28:42.530991851 +0000 UTC m=+0.025551014 system refresh
Nov 25 04:28:43 np0005534696 python3.9[54716]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:28:43 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:28:43 np0005534696 python3.9[54839]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062922.6992245-199-24578028536778/.source.json follow=False _original_basename=podman_network_config.j2 checksum=3c4b430e6271c6fa76c4accce38e7d6407846258 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:28:44 np0005534696 python3.9[54991]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:28:44 np0005534696 python3.9[55114]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062923.8628726-244-29922341688877/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:28:45 np0005534696 python3.9[55266]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:28:45 np0005534696 python3.9[55419]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:28:46 np0005534696 python3.9[55571]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:28:46 np0005534696 python3.9[55723]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:28:47 np0005534696 python3.9[55875]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:28:49 np0005534696 python3.9[56028]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:28:49 np0005534696 python3.9[56182]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:28:50 np0005534696 python3.9[56334]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:28:50 np0005534696 python3.9[56486]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:28:51 np0005534696 python3.9[56639]: ansible-service_facts Invoked
Nov 25 04:28:51 np0005534696 network[56656]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:28:51 np0005534696 network[56657]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:28:51 np0005534696 network[56658]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:28:55 np0005534696 python3.9[57110]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:28:57 np0005534696 python3.9[57263]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 04:28:58 np0005534696 python3.9[57415]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:28:59 np0005534696 python3.9[57540]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062938.5238106-677-51320750969571/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:00 np0005534696 python3.9[57694]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:00 np0005534696 python3.9[57819]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062940.033812-722-127760448909534/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:02 np0005534696 python3.9[57973]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:03 np0005534696 python3.9[58127]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:29:04 np0005534696 python3.9[58211]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:29:05 np0005534696 python3.9[58365]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:29:06 np0005534696 python3.9[58449]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:29:06 np0005534696 chronyd[746]: chronyd exiting
Nov 25 04:29:06 np0005534696 systemd[1]: Stopping NTP client/server...
Nov 25 04:29:06 np0005534696 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 04:29:06 np0005534696 systemd[1]: Stopped NTP client/server.
Nov 25 04:29:06 np0005534696 systemd[1]: Starting NTP client/server...
Nov 25 04:29:06 np0005534696 chronyd[58458]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 04:29:06 np0005534696 chronyd[58458]: Frequency -9.903 +/- 0.123 ppm read from /var/lib/chrony/drift
Nov 25 04:29:06 np0005534696 chronyd[58458]: Loaded seccomp filter (level 2)
Nov 25 04:29:06 np0005534696 systemd[1]: Started NTP client/server.
Nov 25 04:29:06 np0005534696 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 04:29:06 np0005534696 systemd[1]: session-12.scope: Consumed 17.997s CPU time.
Nov 25 04:29:06 np0005534696 systemd-logind[744]: Session 12 logged out. Waiting for processes to exit.
Nov 25 04:29:06 np0005534696 systemd-logind[744]: Removed session 12.
Nov 25 04:29:12 np0005534696 systemd-logind[744]: New session 13 of user zuul.
Nov 25 04:29:12 np0005534696 systemd[1]: Started Session 13 of User zuul.
Nov 25 04:29:12 np0005534696 python3.9[58639]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:13 np0005534696 python3.9[58791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:14 np0005534696 python3.9[58914]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062953.0170622-65-168452184953962/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:14 np0005534696 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 04:29:14 np0005534696 systemd[1]: session-13.scope: Consumed 1.102s CPU time.
Nov 25 04:29:14 np0005534696 systemd-logind[744]: Session 13 logged out. Waiting for processes to exit.
Nov 25 04:29:14 np0005534696 systemd-logind[744]: Removed session 13.
Nov 25 04:29:20 np0005534696 systemd-logind[744]: New session 14 of user zuul.
Nov 25 04:29:20 np0005534696 systemd[1]: Started Session 14 of User zuul.
Nov 25 04:29:21 np0005534696 python3.9[59092]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:29:22 np0005534696 python3.9[59248]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:22 np0005534696 python3.9[59423]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:23 np0005534696 python3.9[59546]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764062962.1992211-85-276023156378468/.source.json _original_basename=.wmen_wsm follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:23 np0005534696 python3.9[59698]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:24 np0005534696 python3.9[59821]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062963.6597064-155-244939946818184/.source _original_basename=.w_aqi75i follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:24 np0005534696 python3.9[59973]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:29:25 np0005534696 python3.9[60125]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:25 np0005534696 python3.9[60248]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062965.0330055-226-154317165362546/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:29:26 np0005534696 python3.9[60400]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:26 np0005534696 python3.9[60523]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764062965.808915-226-136670184499171/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:29:27 np0005534696 python3.9[60675]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:27 np0005534696 python3.9[60827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:27 np0005534696 python3.9[60950]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062967.190741-338-245936966467367/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:28 np0005534696 python3.9[61102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:28 np0005534696 python3.9[61225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062968.0619712-382-211658298327740/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:29 np0005534696 python3.9[61377]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:29:29 np0005534696 systemd[1]: Reloading.
Nov 25 04:29:29 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:29:29 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:29:29 np0005534696 systemd[1]: Reloading.
Nov 25 04:29:29 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:29:29 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:29:29 np0005534696 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 04:29:29 np0005534696 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 04:29:30 np0005534696 python3.9[61604]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:30 np0005534696 python3.9[61727]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062970.1593022-452-229600471945165/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:31 np0005534696 python3.9[61879]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:31 np0005534696 python3.9[62002]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062971.014247-496-40368332839633/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:32 np0005534696 python3.9[62154]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:29:32 np0005534696 systemd[1]: Reloading.
Nov 25 04:29:32 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:29:32 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:29:32 np0005534696 systemd[1]: Reloading.
Nov 25 04:29:32 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:29:32 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:29:32 np0005534696 systemd[1]: Starting Create netns directory...
Nov 25 04:29:32 np0005534696 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 04:29:32 np0005534696 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 04:29:32 np0005534696 systemd[1]: Finished Create netns directory.
Nov 25 04:29:33 np0005534696 python3.9[62381]: ansible-ansible.builtin.service_facts Invoked
Nov 25 04:29:33 np0005534696 network[62398]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:29:33 np0005534696 network[62399]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:29:33 np0005534696 network[62400]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:29:35 np0005534696 python3.9[62662]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:29:36 np0005534696 systemd[1]: Reloading.
Nov 25 04:29:36 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:29:36 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:29:36 np0005534696 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 04:29:36 np0005534696 iptables.init[62702]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 04:29:36 np0005534696 iptables.init[62702]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 04:29:36 np0005534696 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 04:29:36 np0005534696 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 04:29:37 np0005534696 python3.9[62898]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:29:37 np0005534696 python3.9[63052]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:29:38 np0005534696 systemd[1]: Reloading.
Nov 25 04:29:38 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:29:38 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:29:38 np0005534696 systemd[1]: Starting Netfilter Tables...
Nov 25 04:29:38 np0005534696 systemd[1]: Finished Netfilter Tables.
Nov 25 04:29:38 np0005534696 python3.9[63244]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:29:39 np0005534696 python3.9[63397]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:40 np0005534696 python3.9[63522]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062979.49465-704-74136467530388/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:40 np0005534696 python3.9[63675]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:29:41 np0005534696 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 04:29:41 np0005534696 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 04:29:41 np0005534696 python3.9[63831]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:42 np0005534696 python3.9[63983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:42 np0005534696 python3.9[64106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062981.7448635-797-75108361144330/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:43 np0005534696 python3.9[64258]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 04:29:43 np0005534696 systemd[1]: Starting Time & Date Service...
Nov 25 04:29:43 np0005534696 systemd[1]: Started Time & Date Service.
Nov 25 04:29:44 np0005534696 python3.9[64414]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:44 np0005534696 python3.9[64566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:44 np0005534696 python3.9[64689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062984.2552676-902-89243157924038/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:45 np0005534696 python3.9[64841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:45 np0005534696 python3.9[64964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764062985.1413798-947-20331298536149/.source.yaml _original_basename=.q530d1f2 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:46 np0005534696 python3.9[65116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:46 np0005534696 python3.9[65239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062986.1085432-992-15380191064259/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:47 np0005534696 python3.9[65391]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:29:47 np0005534696 python3.9[65544]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:29:48 np0005534696 python3[65697]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 04:29:48 np0005534696 python3.9[65849]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:49 np0005534696 python3.9[65972]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062988.635719-1109-256399953413261/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:49 np0005534696 python3.9[66124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:50 np0005534696 python3.9[66247]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062989.5619664-1154-222639860787137/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:50 np0005534696 python3.9[66399]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:51 np0005534696 python3.9[66522]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062990.50187-1199-185938553972793/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:51 np0005534696 python3.9[66674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:52 np0005534696 python3.9[66797]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062991.429978-1244-35428007410779/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:52 np0005534696 python3.9[66949]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:29:53 np0005534696 python3.9[67072]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764062992.3696113-1289-88686575806461/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:53 np0005534696 python3.9[67224]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:54 np0005534696 python3.9[67376]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:29:54 np0005534696 python3.9[67535]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:55 np0005534696 python3.9[67688]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:56 np0005534696 python3.9[67840]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:29:56 np0005534696 python3.9[67992]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 04:29:57 np0005534696 python3.9[68145]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 04:29:57 np0005534696 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 04:29:57 np0005534696 systemd[1]: session-14.scope: Consumed 25.012s CPU time.
Nov 25 04:29:57 np0005534696 systemd-logind[744]: Session 14 logged out. Waiting for processes to exit.
Nov 25 04:29:57 np0005534696 systemd-logind[744]: Removed session 14.
Nov 25 04:30:03 np0005534696 systemd-logind[744]: New session 15 of user zuul.
Nov 25 04:30:03 np0005534696 systemd[1]: Started Session 15 of User zuul.
Nov 25 04:30:04 np0005534696 python3.9[68326]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 04:30:04 np0005534696 python3.9[68478]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:30:05 np0005534696 python3.9[68630]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:30:06 np0005534696 python3.9[68782]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBYH+LEkGk38QCoX+uCPb3zHk7+XCeEWV22HpalqUrYF70U5Myra5/E2/v2kioqGNh5TR9q+A7kNO0JU78Ai+6UBv5aJlbEptu33E5t38qiAv3rpyypYwQ8PdWBl7OCeDcqz0EyYAZEw7rLbCWimqRhYsSXuUND+rRboiuI8DEX229oAgnRmIjyPJTTdKGiM3FTdl9YiSbYNyBykzJ8AugCfme4+hmds+8LJloh2aJjRJCs3/GvxdaGJcjBWAqN3Aurg+gPekKe4fwmOir2+KpqBDQE9YMfiBvraaCMGrDXkAjPdsycsvGMsWckhOgEW5qpTIt+ca5kcrK43ChAH5R/PpHlHnEYqw2o26BLmqIejfmXKRSxmH/Fq9Ldj3DMLJr4NTFBfJAl8wqsUKs6/0jngwOCYz6NLs7GgGZLMYv6wbRVgUpCc4ikQ8f1EDmXTdtqxef+QdmLTgWY1qCqe5lL8BcDDCjOTLJ6bbLUAdubY1z4vb6SFVcamH4SkSCFxs=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHCQQOw3EbtZ2XAFA2gGrEnb7MaEAFwIJjyskket7pD#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFP8ctNKDLqIcODtgMol02WD/NgFM5ja/WeN20e07JH/Mz/Ge/v2/ybsY8LOtiyzixlX47XT8hWBR4IBwS2uvfM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/QqShzRf5Fxs30q3tSf7IhrByfRVQwrs4CVW/gcd2Sdcp7tmVXVNFpJc8XlgTmWxcSLbFtAv0HgJOJ3p6/+g394nChAIaM55uhK/RLFqBZ/byiFqEjvN2LkEWuUVdvbZM808GhONJnWQtg70nn99jeLP34zkSD7gsU7cykxF7K7VyeBfeSiuOcyTjXvVfXr9TZxCZMrsb4eWFZAZ4QERXITlLcZthwc0kd17QWJWLo8Ssv4Qu0DtCHtqHO07s7Nz/CpSs0TX5jVM+C+2rAMn+aAZ4J25X8di4ABF5tO27d+ePazRlU5PWjb8n6kdy1B/cjHgvajXOoUPb5RjyVx2IgULBXaWsIRO23wp8YqiE1OdTly2+Nr5KiTPvR5yqq9C6aBNzS7YyUQc6Rf2RBAaLQbA36NJLGvPUWC7iYVtWdGoTfcTmzqkD2s3hzZl+zU2xNS0IpwByJsOJVIijtGFh1Y45uujq0WUJNPf1ayrY2Z/TV+iO/1iah3JArjyNiq8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPD1sScOy6Aiq5PZkl3KepHqJnvlMIZW4R0DzMl4b3w#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO/iVb5vehoW1eqrk4jdR3j25kacpoWkaPIq4PHAndTN4lXAEwSRab7iUqXkAAaYvUnrCJ86WUoAYGkII0QB5wA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSE1VMIuB9MiQ17/QHDRAbfwrBNbTb+wZH1rCqeQvAxcHqZYp6TugJnyWX+nah5oDk8vz2PCIUW2lm/tVgP4Y2JHeaN2uMNgVnz1WtD6lCQORMYi1R+KpBgiAQoZAjAyC5Ugx5LWbDvrwtpt0zi2DEgCr2Zao5DG5UAaIcs7/Rj2LRx3hgA4jJ9xJKHVi5bUZfjIlWxLzVXVYT+dvUNrZoiVMBcaUMZRpU4tJ/76mE2jbqsfHEPFwHZ6ljoIegFbzNYoKYMCPK+DeOs/73xD4r/nzeQOK3IQzMOEEVaUYvceA+EPX4M+MrKfkNrJwf35qTOFJpb368gJsebA9uXjzPfzX/uh1atxLv5SihEzC5fHdiZ3BZ3wLEy0C7lvXyRBZdQx+anEYQnDepM/ThOT4YR2BNSCdRS2OpzeSJDS+o5CS++zCqWM4yI3lufZm8O8JqPEblV518196TSyMlAOzPbjEjrUaYGdljY5S2OzKA4PBJW4hW4RyBtjcZWJBpNlM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBoG9NSSqw98oHfgpW8u+wJYHDhMiOjIhpCElLIROYdO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFL1noqwoCl3YzxWiRl0GcsDxYERT1o8e2TvLqUkxWuv8xj0oHuq7+GhcKu7HpiCls71ko7MDcOX4zteG544k4=#012 create=True mode=0644 path=/tmp/ansible.zgcdhzqj state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:30:06 np0005534696 python3.9[68934]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.zgcdhzqj' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:30:07 np0005534696 python3.9[69088]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.zgcdhzqj state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:30:07 np0005534696 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 04:30:07 np0005534696 systemd[1]: session-15.scope: Consumed 2.307s CPU time.
Nov 25 04:30:07 np0005534696 systemd-logind[744]: Session 15 logged out. Waiting for processes to exit.
Nov 25 04:30:07 np0005534696 systemd-logind[744]: Removed session 15.
Nov 25 04:30:13 np0005534696 systemd-logind[744]: New session 16 of user zuul.
Nov 25 04:30:13 np0005534696 systemd[1]: Started Session 16 of User zuul.
Nov 25 04:30:13 np0005534696 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 04:30:14 np0005534696 python3.9[69268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:30:15 np0005534696 python3.9[69424]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 04:30:15 np0005534696 python3.9[69578]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:30:16 np0005534696 python3.9[69731]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:30:16 np0005534696 python3.9[69884]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:30:17 np0005534696 python3.9[70038]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:30:18 np0005534696 python3.9[70193]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:30:18 np0005534696 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 04:30:18 np0005534696 systemd[1]: session-16.scope: Consumed 3.023s CPU time.
Nov 25 04:30:18 np0005534696 systemd-logind[744]: Session 16 logged out. Waiting for processes to exit.
Nov 25 04:30:18 np0005534696 systemd-logind[744]: Removed session 16.
Nov 25 04:30:23 np0005534696 systemd-logind[744]: New session 17 of user zuul.
Nov 25 04:30:23 np0005534696 systemd[1]: Started Session 17 of User zuul.
Nov 25 04:30:24 np0005534696 python3.9[70371]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:30:25 np0005534696 python3.9[70527]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:30:25 np0005534696 python3.9[70611]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 04:30:27 np0005534696 python3.9[70762]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:30:28 np0005534696 python3.9[70913]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 04:30:28 np0005534696 python3.9[71063]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:30:29 np0005534696 python3.9[71213]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:30:29 np0005534696 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 04:30:29 np0005534696 systemd[1]: session-17.scope: Consumed 4.171s CPU time.
Nov 25 04:30:29 np0005534696 systemd-logind[744]: Session 17 logged out. Waiting for processes to exit.
Nov 25 04:30:29 np0005534696 systemd-logind[744]: Removed session 17.
Nov 25 04:30:36 np0005534696 systemd-logind[744]: New session 18 of user zuul.
Nov 25 04:30:36 np0005534696 systemd[1]: Started Session 18 of User zuul.
Nov 25 04:30:40 np0005534696 python3[71979]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:30:41 np0005534696 python3[72070]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 25 04:30:42 np0005534696 python3[72097]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 04:30:42 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:30:43 np0005534696 python3[72124]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:30:43 np0005534696 kernel: loop: module loaded
Nov 25 04:30:43 np0005534696 kernel: loop3: detected capacity change from 0 to 41943040
Nov 25 04:30:43 np0005534696 python3[72159]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:30:43 np0005534696 lvm[72162]: PV /dev/loop3 not used.
Nov 25 04:30:43 np0005534696 lvm[72171]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 04:30:43 np0005534696 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 25 04:30:43 np0005534696 lvm[72173]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 25 04:30:43 np0005534696 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 25 04:30:43 np0005534696 python3[72251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 04:30:44 np0005534696 python3[72324]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063043.657046-37202-153203089315789/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:30:44 np0005534696 python3[72374]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:30:44 np0005534696 systemd[1]: Reloading.
Nov 25 04:30:44 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:30:44 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:30:44 np0005534696 systemd[1]: Starting Ceph OSD losetup...
Nov 25 04:30:44 np0005534696 bash[72415]: /dev/loop3: [64513]:4327758 (/var/lib/ceph-osd-0.img)
Nov 25 04:30:44 np0005534696 systemd[1]: Finished Ceph OSD losetup.
Nov 25 04:30:44 np0005534696 lvm[72416]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 04:30:44 np0005534696 lvm[72416]: VG ceph_vg0 finished
Nov 25 04:30:46 np0005534696 python3[72440]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:31:16 np0005534696 chronyd[58458]: Selected source 23.186.168.128 (pool.ntp.org)
Nov 25 04:31:54 np0005534696 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 04:31:54 np0005534696 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 04:31:54 np0005534696 systemd-logind[744]: New session 19 of user ceph-admin.
Nov 25 04:31:54 np0005534696 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 04:31:55 np0005534696 systemd[1]: Starting User Manager for UID 42477...
Nov 25 04:31:55 np0005534696 systemd[72488]: Queued start job for default target Main User Target.
Nov 25 04:31:55 np0005534696 systemd[72488]: Created slice User Application Slice.
Nov 25 04:31:55 np0005534696 systemd[72488]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 04:31:55 np0005534696 systemd[72488]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 04:31:55 np0005534696 systemd[72488]: Reached target Paths.
Nov 25 04:31:55 np0005534696 systemd[72488]: Reached target Timers.
Nov 25 04:31:55 np0005534696 systemd[72488]: Starting D-Bus User Message Bus Socket...
Nov 25 04:31:55 np0005534696 systemd[72488]: Starting Create User's Volatile Files and Directories...
Nov 25 04:31:55 np0005534696 systemd[72488]: Finished Create User's Volatile Files and Directories.
Nov 25 04:31:55 np0005534696 systemd[72488]: Listening on D-Bus User Message Bus Socket.
Nov 25 04:31:55 np0005534696 systemd[72488]: Reached target Sockets.
Nov 25 04:31:55 np0005534696 systemd[72488]: Reached target Basic System.
Nov 25 04:31:55 np0005534696 systemd[1]: Started User Manager for UID 42477.
Nov 25 04:31:55 np0005534696 systemd[72488]: Reached target Main User Target.
Nov 25 04:31:55 np0005534696 systemd[72488]: Startup finished in 82ms.
Nov 25 04:31:55 np0005534696 systemd[1]: Started Session 19 of User ceph-admin.
Nov 25 04:31:55 np0005534696 systemd-logind[744]: New session 21 of user ceph-admin.
Nov 25 04:31:55 np0005534696 systemd[1]: Started Session 21 of User ceph-admin.
Nov 25 04:31:55 np0005534696 systemd-logind[744]: New session 22 of user ceph-admin.
Nov 25 04:31:55 np0005534696 systemd[1]: Started Session 22 of User ceph-admin.
Nov 25 04:31:55 np0005534696 systemd-logind[744]: New session 23 of user ceph-admin.
Nov 25 04:31:55 np0005534696 systemd[1]: Started Session 23 of User ceph-admin.
Nov 25 04:31:55 np0005534696 systemd-logind[744]: New session 24 of user ceph-admin.
Nov 25 04:31:55 np0005534696 systemd[1]: Started Session 24 of User ceph-admin.
Nov 25 04:31:56 np0005534696 systemd-logind[744]: New session 25 of user ceph-admin.
Nov 25 04:31:56 np0005534696 systemd[1]: Started Session 25 of User ceph-admin.
Nov 25 04:31:56 np0005534696 systemd-logind[744]: New session 26 of user ceph-admin.
Nov 25 04:31:56 np0005534696 systemd[1]: Started Session 26 of User ceph-admin.
Nov 25 04:31:56 np0005534696 systemd-logind[744]: New session 27 of user ceph-admin.
Nov 25 04:31:56 np0005534696 systemd[1]: Started Session 27 of User ceph-admin.
Nov 25 04:31:56 np0005534696 systemd-logind[744]: New session 28 of user ceph-admin.
Nov 25 04:31:56 np0005534696 systemd[1]: Started Session 28 of User ceph-admin.
Nov 25 04:31:57 np0005534696 systemd-logind[744]: New session 29 of user ceph-admin.
Nov 25 04:31:57 np0005534696 systemd[1]: Started Session 29 of User ceph-admin.
Nov 25 04:31:57 np0005534696 systemd-logind[744]: New session 30 of user ceph-admin.
Nov 25 04:31:58 np0005534696 systemd[1]: Started Session 30 of User ceph-admin.
Nov 25 04:31:58 np0005534696 systemd-logind[744]: New session 31 of user ceph-admin.
Nov 25 04:31:58 np0005534696 systemd[1]: Started Session 31 of User ceph-admin.
Nov 25 04:31:58 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:27 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:28 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:28 np0005534696 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73071 (sysctl)
Nov 25 04:32:28 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:28 np0005534696 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 04:32:28 np0005534696 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 04:32:29 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:31 np0005534696 systemd[1]: var-lib-containers-storage-overlay-compat556493674-merged.mount: Deactivated successfully.
Nov 25 04:32:31 np0005534696 systemd[1]: var-lib-containers-storage-overlay-compat556493674-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 04:32:46 np0005534696 podman[73239]: 2025-11-25 09:32:46.645983315 +0000 UTC m=+16.900786076 container create 37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 04:32:46 np0005534696 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2781324527-merged.mount: Deactivated successfully.
Nov 25 04:32:46 np0005534696 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 04:32:46 np0005534696 systemd[1]: Started libpod-conmon-37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628.scope.
Nov 25 04:32:46 np0005534696 podman[73239]: 2025-11-25 09:32:46.63411861 +0000 UTC m=+16.888921381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:32:46 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:32:46 np0005534696 podman[73239]: 2025-11-25 09:32:46.709983119 +0000 UTC m=+16.964785880 container init 37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_dirac, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:32:46 np0005534696 podman[73239]: 2025-11-25 09:32:46.714765924 +0000 UTC m=+16.969568685 container start 37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:46 np0005534696 podman[73239]: 2025-11-25 09:32:46.715925095 +0000 UTC m=+16.970727856 container attach 37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Nov 25 04:32:46 np0005534696 wonderful_dirac[73291]: 167 167
Nov 25 04:32:46 np0005534696 systemd[1]: libpod-37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628.scope: Deactivated successfully.
Nov 25 04:32:46 np0005534696 podman[73296]: 2025-11-25 09:32:46.744434766 +0000 UTC m=+0.017351450 container died 37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:32:46 np0005534696 systemd[1]: var-lib-containers-storage-overlay-b895d1f1ceb706da7e9b868f1aa734fe56db7042573e3b1b456afc882ede98ff-merged.mount: Deactivated successfully.
Nov 25 04:32:46 np0005534696 podman[73296]: 2025-11-25 09:32:46.760169437 +0000 UTC m=+0.033086111 container remove 37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_dirac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:32:46 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:46 np0005534696 systemd[1]: libpod-conmon-37046f81e7fd06973527d5dca7c910ac589ccec02300fd17665ed9704053a628.scope: Deactivated successfully.
Nov 25 04:32:46 np0005534696 podman[73314]: 2025-11-25 09:32:46.870601741 +0000 UTC m=+0.026546156 container create d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:32:46 np0005534696 systemd[1]: Started libpod-conmon-d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174.scope.
Nov 25 04:32:46 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:32:46 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d20a1bb83d0e93295fd2b9eff1a27bf6278684abfa58a40034a552431cbad2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:46 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d20a1bb83d0e93295fd2b9eff1a27bf6278684abfa58a40034a552431cbad2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:46 np0005534696 podman[73314]: 2025-11-25 09:32:46.920901182 +0000 UTC m=+0.076845607 container init d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 04:32:46 np0005534696 podman[73314]: 2025-11-25 09:32:46.925840359 +0000 UTC m=+0.081784774 container start d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:32:46 np0005534696 podman[73314]: 2025-11-25 09:32:46.926953614 +0000 UTC m=+0.082898028 container attach d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:32:46 np0005534696 podman[73314]: 2025-11-25 09:32:46.858742325 +0000 UTC m=+0.014686750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]: [
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:    {
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "available": false,
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "being_replaced": false,
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "ceph_device_lvm": false,
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "lsm_data": {},
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "lvs": [],
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "path": "/dev/sr0",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "rejected_reasons": [
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "Insufficient space (<5GB)",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "Has a FileSystem"
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        ],
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        "sys_api": {
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "actuators": null,
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "device_nodes": [
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:                "sr0"
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            ],
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "devname": "sr0",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "human_readable_size": "474.00 KB",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "id_bus": "ata",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "model": "QEMU DVD-ROM",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "nr_requests": "64",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "parent": "/dev/sr0",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "partitions": {},
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "path": "/dev/sr0",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "removable": "1",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "rev": "2.5+",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "ro": "0",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "rotational": "1",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "sas_address": "",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "sas_device_handle": "",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "scheduler_mode": "mq-deadline",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "sectors": 0,
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "sectorsize": "2048",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "size": 485376.0,
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "support_discard": "2048",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "type": "disk",
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:            "vendor": "QEMU"
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:        }
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]:    }
Nov 25 04:32:47 np0005534696 mystifying_cray[73327]: ]
Nov 25 04:32:47 np0005534696 systemd[1]: libpod-d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174.scope: Deactivated successfully.
Nov 25 04:32:47 np0005534696 podman[73314]: 2025-11-25 09:32:47.413038018 +0000 UTC m=+0.568982443 container died d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:32:47 np0005534696 podman[73314]: 2025-11-25 09:32:47.431563355 +0000 UTC m=+0.587507770 container remove d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 04:32:47 np0005534696 systemd[1]: libpod-conmon-d9ce6f0f91a3d9799af119f891df9ffd6a410594ce49d0f734428f3d0f8f5174.scope: Deactivated successfully.
Nov 25 04:32:49 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:49 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:49 np0005534696 podman[75185]: 2025-11-25 09:32:49.205539759 +0000 UTC m=+0.025545723 container create d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:32:49 np0005534696 systemd[1]: Started libpod-conmon-d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216.scope.
Nov 25 04:32:49 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:32:49 np0005534696 podman[75185]: 2025-11-25 09:32:49.242729341 +0000 UTC m=+0.062735327 container init d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:32:49 np0005534696 podman[75185]: 2025-11-25 09:32:49.246739961 +0000 UTC m=+0.066745926 container start d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 04:32:49 np0005534696 podman[75185]: 2025-11-25 09:32:49.247823029 +0000 UTC m=+0.067828994 container attach d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hypatia, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 25 04:32:49 np0005534696 jolly_hypatia[75198]: 167 167
Nov 25 04:32:49 np0005534696 systemd[1]: libpod-d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216.scope: Deactivated successfully.
Nov 25 04:32:49 np0005534696 podman[75185]: 2025-11-25 09:32:49.249871795 +0000 UTC m=+0.069877761 container died d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 04:32:49 np0005534696 podman[75185]: 2025-11-25 09:32:49.265378059 +0000 UTC m=+0.085384025 container remove d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:32:49 np0005534696 podman[75185]: 2025-11-25 09:32:49.194788447 +0000 UTC m=+0.014794432 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:32:49 np0005534696 systemd[1]: libpod-conmon-d725e6c5814ec6acbeaacfa254edc3fd0aabfde5ff568d92dcc37ea22e396216.scope: Deactivated successfully.
Nov 25 04:32:49 np0005534696 podman[75210]: 2025-11-25 09:32:49.304528425 +0000 UTC m=+0.025094928 container create 370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hopper, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:49 np0005534696 systemd[1]: Started libpod-conmon-370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071.scope.
Nov 25 04:32:49 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:32:49 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6985873a7f593a8955d8ea00ae1643b579d6e937e507ead0ae4c6aa11280c3/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:49 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6985873a7f593a8955d8ea00ae1643b579d6e937e507ead0ae4c6aa11280c3/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:49 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6985873a7f593a8955d8ea00ae1643b579d6e937e507ead0ae4c6aa11280c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:49 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6985873a7f593a8955d8ea00ae1643b579d6e937e507ead0ae4c6aa11280c3/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:49 np0005534696 podman[75210]: 2025-11-25 09:32:49.347671576 +0000 UTC m=+0.068238090 container init 370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:32:49 np0005534696 podman[75210]: 2025-11-25 09:32:49.351887199 +0000 UTC m=+0.072453704 container start 370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:32:49 np0005534696 podman[75210]: 2025-11-25 09:32:49.35403456 +0000 UTC m=+0.074601064 container attach 370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:49 np0005534696 podman[75210]: 2025-11-25 09:32:49.294675616 +0000 UTC m=+0.015242140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:32:49 np0005534696 systemd[1]: libpod-370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071.scope: Deactivated successfully.
Nov 25 04:32:49 np0005534696 podman[75210]: 2025-11-25 09:32:49.400701688 +0000 UTC m=+0.121268193 container died 370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 25 04:32:49 np0005534696 podman[75210]: 2025-11-25 09:32:49.417197636 +0000 UTC m=+0.137764141 container remove 370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hopper, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:32:49 np0005534696 systemd[1]: libpod-conmon-370041f055c9e0aa2ec814891fe76ac7a770664919c952335232b35be2412071.scope: Deactivated successfully.
Nov 25 04:32:49 np0005534696 systemd[1]: Reloading.
Nov 25 04:32:49 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:32:49 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:32:49 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:49 np0005534696 systemd[1]: Reloading.
Nov 25 04:32:49 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:32:49 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:32:49 np0005534696 systemd[1]: Reached target All Ceph clusters and services.
Nov 25 04:32:49 np0005534696 systemd[1]: Reloading.
Nov 25 04:32:49 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:32:49 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:32:49 np0005534696 systemd[1]: Reached target Ceph cluster af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:32:50 np0005534696 systemd[1]: Reloading.
Nov 25 04:32:50 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:32:50 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:32:50 np0005534696 systemd[1]: Reloading.
Nov 25 04:32:50 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:32:50 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:32:50 np0005534696 systemd[1]: Created slice Slice /system/ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:32:50 np0005534696 systemd[1]: Reached target System Time Set.
Nov 25 04:32:50 np0005534696 systemd[1]: Reached target System Time Synchronized.
Nov 25 04:32:50 np0005534696 systemd[1]: Starting Ceph mon.compute-2 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:32:50 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:50 np0005534696 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 04:32:50 np0005534696 podman[75492]: 2025-11-25 09:32:50.577842147 +0000 UTC m=+0.026739769 container create 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:32:50 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91c44db41fa0b70a75cd0098e1683b82a82b97f8495b05254c245bcc3c833fac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:50 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91c44db41fa0b70a75cd0098e1683b82a82b97f8495b05254c245bcc3c833fac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:50 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91c44db41fa0b70a75cd0098e1683b82a82b97f8495b05254c245bcc3c833fac/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:50 np0005534696 podman[75492]: 2025-11-25 09:32:50.566829467 +0000 UTC m=+0.015727099 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:32:50 np0005534696 podman[75492]: 2025-11-25 09:32:50.69112632 +0000 UTC m=+0.140023962 container init 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:32:50 np0005534696 podman[75492]: 2025-11-25 09:32:50.695755288 +0000 UTC m=+0.144652910 container start 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: pidfile_write: ignore empty --pid-file
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: load: jerasure load: lrc 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: RocksDB version: 7.9.2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Git sha 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: DB SUMMARY
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: DB Session ID:  IFRO04M9OA18QGXPWOSU
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: CURRENT file:  CURRENT
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                         Options.error_if_exists: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                       Options.create_if_missing: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                                     Options.env: 0x56183df17c20
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                                Options.info_log: 0x56183fd0ba20
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                              Options.statistics: (nil)
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                               Options.use_fsync: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                              Options.db_log_dir: 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                                 Options.wal_dir: 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                    Options.write_buffer_manager: 0x56183fd0f900
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.unordered_write: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                               Options.row_cache: None
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                              Options.wal_filter: None
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.two_write_queues: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.wal_compression: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.atomic_flush: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.max_background_jobs: 2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.max_background_compactions: -1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.max_subcompactions: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.max_total_wal_size: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                          Options.max_open_files: -1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:       Options.compaction_readahead_size: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Compression algorithms supported:
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: 	kZSTD supported: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: 	kXpressCompression supported: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: 	kZlibCompression supported: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:           Options.merge_operator: 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:        Options.compaction_filter: None
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56183fd0a5c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56183fd2f350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:        Options.write_buffer_size: 33554432
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:  Options.max_write_buffer_number: 2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:          Options.compression: NoCompression
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.num_levels: 7
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b1e79b6b-6b16-46dd-99f0-cc98f5801a7e
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063170726394, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063170727225, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063170727308, "job": 1, "event": "recovery_finished"}
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56183fd30e00
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: DB pointer 0x56183fe3a000
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.23 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.23 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56183fd2f350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(???) e0 preinit fsid af1c9ae3-08d7-5547-a53d-2cccf7c6ef90
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).mds e1 new map
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).mds e1 print_map#012e1#012btime 2025-11-25T09:31:16.071954+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 2 up, 2 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Adjusting osd_memory_target on compute-1 to  5248M
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Adjusting osd_memory_target on compute-0 to 128.7M
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Unable to set osd_memory_target on compute-0 to 134963200: error parsing value: Value '134963200' is below minimum 939524096
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: OSD bench result of 21369.172304 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: OSD bench result of 23848.115549 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: osd.1 [v2:192.168.122.100:6802/1629670021,v1:192.168.122.100:6803/1629670021] boot
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: osd.0 [v2:192.168.122.101:6800/3040262040,v1:192.168.122.101:6801/3040262040] boot
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: Deploying daemon mon.compute-2 on compute-2
Nov 25 04:32:50 np0005534696 ceph-mon[75508]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Nov 25 04:32:51 np0005534696 bash[75492]: 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96
Nov 25 04:32:51 np0005534696 systemd[1]: Started Ceph mon.compute-2 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:32:52 np0005534696 ceph-mon[75508]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Nov 25 04:32:52 np0005534696 ceph-mon[75508]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 25 04:32:52 np0005534696 ceph-mon[75508]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 25 04:32:52 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865372,os=Linux}
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: mon.compute-0 calling monitor election
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: mon.compute-2 calling monitor election
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: overall HEALTH_OK
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:32:55 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 04:32:56 np0005534696 ceph-mon[75508]: Deploying daemon mon.compute-1 on compute-1
Nov 25 04:32:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 25 04:32:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 25 04:32:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 25 04:32:57 np0005534696 ceph-mon[75508]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Nov 25 04:32:57 np0005534696 ceph-mon[75508]: paxos.1).electionLogic(10) init, last seen epoch 10
Nov 25 04:32:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 04:32:57 np0005534696 podman[75630]: 2025-11-25 09:32:57.440916044 +0000 UTC m=+0.027047445 container create 2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dirac, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:32:57 np0005534696 systemd[1]: Started libpod-conmon-2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea.scope.
Nov 25 04:32:57 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:32:57 np0005534696 podman[75630]: 2025-11-25 09:32:57.494379969 +0000 UTC m=+0.080511390 container init 2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 25 04:32:57 np0005534696 podman[75630]: 2025-11-25 09:32:57.498472983 +0000 UTC m=+0.084604384 container start 2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 04:32:57 np0005534696 podman[75630]: 2025-11-25 09:32:57.499563185 +0000 UTC m=+0.085694605 container attach 2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dirac, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 04:32:57 np0005534696 charming_dirac[75644]: 167 167
Nov 25 04:32:57 np0005534696 systemd[1]: libpod-2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea.scope: Deactivated successfully.
Nov 25 04:32:57 np0005534696 conmon[75644]: conmon 2b8632ea274b447598e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea.scope/container/memory.events
Nov 25 04:32:57 np0005534696 podman[75630]: 2025-11-25 09:32:57.502667437 +0000 UTC m=+0.088798837 container died 2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:32:57 np0005534696 systemd[1]: var-lib-containers-storage-overlay-8f28a939ab92994bd38d7f6119e93495ced754476745200cc4bcc91b504ed144-merged.mount: Deactivated successfully.
Nov 25 04:32:57 np0005534696 podman[75630]: 2025-11-25 09:32:57.526427295 +0000 UTC m=+0.112558695 container remove 2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:32:57 np0005534696 podman[75630]: 2025-11-25 09:32:57.42905731 +0000 UTC m=+0.015188730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:32:57 np0005534696 systemd[1]: libpod-conmon-2b8632ea274b447598e7537f276d936bfab196b3f63c8f4e2e0f2fa2f67a8eea.scope: Deactivated successfully.
Nov 25 04:32:57 np0005534696 systemd[1]: Reloading.
Nov 25 04:32:57 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:32:57 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:32:57 np0005534696 systemd[1]: Reloading.
Nov 25 04:32:57 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:32:57 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:32:57 np0005534696 systemd[1]: Starting Ceph mgr.compute-2.flybft for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:32:58 np0005534696 podman[75776]: 2025-11-25 09:32:58.087341921 +0000 UTC m=+0.026837080 container create ebeb76731ed56cf9c8eaff990abcdd2f7ee6f0a5170e638140fa332e4abaf92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:32:58 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d1d7dc4011b66979a89fad0a0f559b6b848014aaec376f3067120a34ddcc82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:58 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d1d7dc4011b66979a89fad0a0f559b6b848014aaec376f3067120a34ddcc82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:58 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d1d7dc4011b66979a89fad0a0f559b6b848014aaec376f3067120a34ddcc82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:58 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6d1d7dc4011b66979a89fad0a0f559b6b848014aaec376f3067120a34ddcc82/merged/var/lib/ceph/mgr/ceph-compute-2.flybft supports timestamps until 2038 (0x7fffffff)
Nov 25 04:32:58 np0005534696 podman[75776]: 2025-11-25 09:32:58.13378941 +0000 UTC m=+0.073284569 container init ebeb76731ed56cf9c8eaff990abcdd2f7ee6f0a5170e638140fa332e4abaf92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:32:58 np0005534696 podman[75776]: 2025-11-25 09:32:58.137540594 +0000 UTC m=+0.077035742 container start ebeb76731ed56cf9c8eaff990abcdd2f7ee6f0a5170e638140fa332e4abaf92d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:32:58 np0005534696 bash[75776]: ebeb76731ed56cf9c8eaff990abcdd2f7ee6f0a5170e638140fa332e4abaf92d
Nov 25 04:32:58 np0005534696 podman[75776]: 2025-11-25 09:32:58.076083732 +0000 UTC m=+0.015578900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:32:58 np0005534696 systemd[1]: Started Ceph mgr.compute-2.flybft for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:32:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 25 04:32:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 25 04:32:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 25 04:32:59 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: Deploying daemon mgr.compute-2.flybft on compute-2
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: mon.compute-0 calling monitor election
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: mon.compute-2 calling monitor election
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: mon.compute-1 calling monitor election
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: overall HEALTH_OK
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.plffrn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:02 np0005534696 ceph-mgr[75792]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 04:33:02 np0005534696 ceph-mgr[75792]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 04:33:02 np0005534696 ceph-mgr[75792]: pidfile_write: ignore empty --pid-file
Nov 25 04:33:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:02 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'alerts'
Nov 25 04:33:02 np0005534696 ceph-mgr[75792]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:33:02 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'balancer'
Nov 25 04:33:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:02.685+0000 7ff3f06af140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:33:02 np0005534696 ceph-mgr[75792]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:33:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:02.755+0000 7ff3f06af140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:33:02 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'cephadm'
Nov 25 04:33:03 np0005534696 ceph-mon[75508]: Deploying daemon mgr.compute-1.plffrn on compute-1
Nov 25 04:33:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:03 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'crash'
Nov 25 04:33:03 np0005534696 ceph-mgr[75792]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:33:03 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'dashboard'
Nov 25 04:33:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:03.431+0000 7ff3f06af140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:33:03 np0005534696 podman[75913]: 2025-11-25 09:33:03.662581979 +0000 UTC m=+0.027283668 container create bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 25 04:33:03 np0005534696 systemd[1]: Started libpod-conmon-bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c.scope.
Nov 25 04:33:03 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:03 np0005534696 podman[75913]: 2025-11-25 09:33:03.72097863 +0000 UTC m=+0.085680319 container init bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:33:03 np0005534696 podman[75913]: 2025-11-25 09:33:03.725611885 +0000 UTC m=+0.090313575 container start bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 04:33:03 np0005534696 podman[75913]: 2025-11-25 09:33:03.727358326 +0000 UTC m=+0.092060014 container attach bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:03 np0005534696 pensive_gauss[75926]: 167 167
Nov 25 04:33:03 np0005534696 systemd[1]: libpod-bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c.scope: Deactivated successfully.
Nov 25 04:33:03 np0005534696 conmon[75926]: conmon bba888e521a0bfe74fe4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c.scope/container/memory.events
Nov 25 04:33:03 np0005534696 podman[75913]: 2025-11-25 09:33:03.729616604 +0000 UTC m=+0.094318293 container died bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:03 np0005534696 systemd[1]: var-lib-containers-storage-overlay-e45f17ac6564b054eab2b12e1e373f9cb367de8e0ce99e179dd6358f6d31bd1f-merged.mount: Deactivated successfully.
Nov 25 04:33:03 np0005534696 podman[75913]: 2025-11-25 09:33:03.746226675 +0000 UTC m=+0.110928364 container remove bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gauss, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 25 04:33:03 np0005534696 podman[75913]: 2025-11-25 09:33:03.650590235 +0000 UTC m=+0.015291944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:03 np0005534696 systemd[1]: libpod-conmon-bba888e521a0bfe74fe4b7de5383973ea4eecbc04963fc497fec3eafb3f5f60c.scope: Deactivated successfully.
Nov 25 04:33:03 np0005534696 systemd[1]: Reloading.
Nov 25 04:33:03 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:33:03 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:33:03 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'devicehealth'
Nov 25 04:33:03 np0005534696 ceph-mgr[75792]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:33:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:03.979+0000 7ff3f06af140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:33:03 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 04:33:03 np0005534696 systemd[1]: Reloading.
Nov 25 04:33:04 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:33:04 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]:  from numpy import show_config as show_numpy_config
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'influx'
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:04.126+0000 7ff3f06af140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:33:04 np0005534696 systemd[1]: Starting Ceph crash.compute-2 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'insights'
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:04.188+0000 7ff3f06af140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'iostat'
Nov 25 04:33:04 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:04 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:04 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:04 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:04 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 04:33:04 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 25 04:33:04 np0005534696 ceph-mon[75508]: Deploying daemon crash.compute-2 on compute-2
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:04.305+0000 7ff3f06af140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'k8sevents'
Nov 25 04:33:04 np0005534696 podman[76057]: 2025-11-25 09:33:04.332278799 +0000 UTC m=+0.026959461 container create 75aa60884316418ad129eefc73f5b6e732a4c1361c460e1d9973c744c7782f9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 04:33:04 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a191f463acdf6bbeeca52c4698aa1166b39dbebcbe4f26206a08769e767acc37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:04 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a191f463acdf6bbeeca52c4698aa1166b39dbebcbe4f26206a08769e767acc37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:04 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a191f463acdf6bbeeca52c4698aa1166b39dbebcbe4f26206a08769e767acc37/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:04 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a191f463acdf6bbeeca52c4698aa1166b39dbebcbe4f26206a08769e767acc37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:04 np0005534696 podman[76057]: 2025-11-25 09:33:04.376800832 +0000 UTC m=+0.071481504 container init 75aa60884316418ad129eefc73f5b6e732a4c1361c460e1d9973c744c7782f9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 04:33:04 np0005534696 podman[76057]: 2025-11-25 09:33:04.380305884 +0000 UTC m=+0.074986556 container start 75aa60884316418ad129eefc73f5b6e732a4c1361c460e1d9973c744c7782f9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 25 04:33:04 np0005534696 bash[76057]: 75aa60884316418ad129eefc73f5b6e732a4c1361c460e1d9973c744c7782f9e
Nov 25 04:33:04 np0005534696 podman[76057]: 2025-11-25 09:33:04.32108003 +0000 UTC m=+0.015760702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:04 np0005534696 systemd[1]: Started Ceph crash.compute-2 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: 2025-11-25T09:33:04.506+0000 7f99b0f52640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: 2025-11-25T09:33:04.506+0000 7f99b0f52640 -1 AuthRegistry(0x7f99ac069b10) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: 2025-11-25T09:33:04.509+0000 7f99b0f52640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: 2025-11-25T09:33:04.509+0000 7f99b0f52640 -1 AuthRegistry(0x7f99b0f50ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: 2025-11-25T09:33:04.511+0000 7f99aa575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: 2025-11-25T09:33:04.511+0000 7f99aad76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: 2025-11-25T09:33:04.511+0000 7f99a9d74640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: 2025-11-25T09:33:04.511+0000 7f99b0f52640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 25 04:33:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'localpool'
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 04:33:04 np0005534696 podman[76168]: 2025-11-25 09:33:04.836938754 +0000 UTC m=+0.028326600 container create 085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:04 np0005534696 systemd[1]: Started libpod-conmon-085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee.scope.
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mirroring'
Nov 25 04:33:04 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:04 np0005534696 podman[76168]: 2025-11-25 09:33:04.899510273 +0000 UTC m=+0.090898119 container init 085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hofstadter, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:33:04 np0005534696 podman[76168]: 2025-11-25 09:33:04.904198281 +0000 UTC m=+0.095586126 container start 085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:04 np0005534696 infallible_hofstadter[76181]: 167 167
Nov 25 04:33:04 np0005534696 podman[76168]: 2025-11-25 09:33:04.907715687 +0000 UTC m=+0.099103543 container attach 085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hofstadter, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:33:04 np0005534696 systemd[1]: libpod-085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee.scope: Deactivated successfully.
Nov 25 04:33:04 np0005534696 podman[76168]: 2025-11-25 09:33:04.908733672 +0000 UTC m=+0.100121519 container died 085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:33:04 np0005534696 systemd[1]: var-lib-containers-storage-overlay-6dfeb9ecab19f5af8dc6bb20301b6f6c145eccebaa88c776f58f8cdf1f4a9781-merged.mount: Deactivated successfully.
Nov 25 04:33:04 np0005534696 podman[76168]: 2025-11-25 09:33:04.825196157 +0000 UTC m=+0.016584023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:04 np0005534696 podman[76168]: 2025-11-25 09:33:04.93020201 +0000 UTC m=+0.121589856 container remove 085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 25 04:33:04 np0005534696 systemd[1]: libpod-conmon-085bec5dde41e1dbe374fabd31015e208d87ee79b789e84a0814faffb9f05dee.scope: Deactivated successfully.
Nov 25 04:33:04 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'nfs'
Nov 25 04:33:05 np0005534696 podman[76202]: 2025-11-25 09:33:05.047472857 +0000 UTC m=+0.027878499 container create 19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 04:33:05 np0005534696 systemd[1]: Started libpod-conmon-19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449.scope.
Nov 25 04:33:05 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:05 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca8f34217c7e51cf08e54d11d66cfdb2476c24106b6e2021cc1316a420641d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:05 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca8f34217c7e51cf08e54d11d66cfdb2476c24106b6e2021cc1316a420641d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:05 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca8f34217c7e51cf08e54d11d66cfdb2476c24106b6e2021cc1316a420641d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:05 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca8f34217c7e51cf08e54d11d66cfdb2476c24106b6e2021cc1316a420641d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:05 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca8f34217c7e51cf08e54d11d66cfdb2476c24106b6e2021cc1316a420641d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:05 np0005534696 podman[76202]: 2025-11-25 09:33:05.107282555 +0000 UTC m=+0.087688197 container init 19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:33:05 np0005534696 podman[76202]: 2025-11-25 09:33:05.112873885 +0000 UTC m=+0.093279526 container start 19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:05 np0005534696 podman[76202]: 2025-11-25 09:33:05.119748357 +0000 UTC m=+0.100153998 container attach 19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 04:33:05 np0005534696 podman[76202]: 2025-11-25 09:33:05.036225659 +0000 UTC m=+0.016631301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'orchestrator'
Nov 25 04:33:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:05.174+0000 7ff3f06af140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e12 e12: 2 total, 2 up, 2 in
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2513898650' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:05.360+0000 7ff3f06af140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: --> passed data devices: 0 physical, 1 LVM
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:05.426+0000 7ff3f06af140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_support'
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 836b14f9-a1aa-4fbf-bd6d-42374c72028e
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:05.483+0000 7ff3f06af140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:05.552+0000 7ff3f06af140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'progress'
Nov 25 04:33:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:05.613+0000 7ff3f06af140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'prometheus'
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e13 e13: 3 total, 2 up, 3 in
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e13 _set_new_cache_sizes cache_size:1019938589 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:05 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 25 04:33:05 np0005534696 lvm[76277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 04:33:05 np0005534696 lvm[76277]: VG ceph_vg0 finished
Nov 25 04:33:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:05.909+0000 7ff3f06af140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rbd_support'
Nov 25 04:33:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:05.995+0000 7ff3f06af140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:33:05 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'restful'
Nov 25 04:33:06 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 25 04:33:06 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4042166828' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 25 04:33:06 np0005534696 goofy_fermat[76215]: stderr: got monmap epoch 3
Nov 25 04:33:06 np0005534696 goofy_fermat[76215]: --> Creating keyring file for osd.2
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rgw'
Nov 25 04:33:06 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 25 04:33:06 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 25 04:33:06 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 836b14f9-a1aa-4fbf-bd6d-42374c72028e --setuser ceph --setgroup ceph
Nov 25 04:33:06 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2513898650' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 04:33:06 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.102:0/4258346990' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"}]: dispatch
Nov 25 04:33:06 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.102:0/4258346990' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e"}]': finished
Nov 25 04:33:06 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3191460124' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 04:33:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:06.373+0000 7ff3f06af140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rook'
Nov 25 04:33:06 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e14 e14: 3 total, 2 up, 3 in
Nov 25 04:33:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:06.848+0000 7ff3f06af140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'selftest'
Nov 25 04:33:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:06.910+0000 7ff3f06af140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'snap_schedule'
Nov 25 04:33:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:06.978+0000 7ff3f06af140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:33:06 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'stats'
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'status'
Nov 25 04:33:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:07.105+0000 7ff3f06af140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telegraf'
Nov 25 04:33:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:07.166+0000 7ff3f06af140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telemetry'
Nov 25 04:33:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:07.299+0000 7ff3f06af140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 04:33:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:07.491+0000 7ff3f06af140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'volumes'
Nov 25 04:33:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:07.722+0000 7ff3f06af140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'zabbix'
Nov 25 04:33:07 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e15 e15: 3 total, 2 up, 3 in
Nov 25 04:33:07 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3191460124' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 04:33:07 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:07 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/4269819066' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 04:33:07 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:07 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:07.783+0000 7ff3f06af140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:33:07 np0005534696 ceph-mgr[75792]: ms_deliver_dispatch: unhandled message 0x555de6c48d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 04:33:08 np0005534696 goofy_fermat[76215]: stderr: 2025-11-25T09:33:06.231+0000 7fc04347e740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Nov 25 04:33:08 np0005534696 goofy_fermat[76215]: stderr: 2025-11-25T09:33:06.499+0000 7fc04347e740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 25 04:33:08 np0005534696 goofy_fermat[76215]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 25 04:33:08 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 04:33:08 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 25 04:33:08 np0005534696 ceph-mon[75508]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 04:33:08 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/4269819066' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 04:33:08 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2992947115' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 04:33:08 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e16 e16: 3 total, 2 up, 3 in
Nov 25 04:33:09 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:09 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:09 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 04:33:09 np0005534696 goofy_fermat[76215]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 04:33:09 np0005534696 goofy_fermat[76215]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 25 04:33:09 np0005534696 goofy_fermat[76215]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 25 04:33:09 np0005534696 systemd[1]: libpod-19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449.scope: Deactivated successfully.
Nov 25 04:33:09 np0005534696 systemd[1]: libpod-19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449.scope: Consumed 1.423s CPU time.
Nov 25 04:33:09 np0005534696 podman[77199]: 2025-11-25 09:33:09.080317995 +0000 UTC m=+0.018068492 container died 19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:09 np0005534696 systemd[1]: var-lib-containers-storage-overlay-9ca8f34217c7e51cf08e54d11d66cfdb2476c24106b6e2021cc1316a420641d9-merged.mount: Deactivated successfully.
Nov 25 04:33:09 np0005534696 podman[77199]: 2025-11-25 09:33:09.100567991 +0000 UTC m=+0.038318478 container remove 19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_fermat, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:09 np0005534696 systemd[1]: libpod-conmon-19f1ba91a206cdac7e27b4b6a683d822f427591a07d4f5a02aafde0901bfe449.scope: Deactivated successfully.
Nov 25 04:33:09 np0005534696 podman[77290]: 2025-11-25 09:33:09.483268019 +0000 UTC m=+0.026886754 container create 58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:09 np0005534696 systemd[1]: Started libpod-conmon-58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea.scope.
Nov 25 04:33:09 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:09 np0005534696 podman[77290]: 2025-11-25 09:33:09.530764382 +0000 UTC m=+0.074383136 container init 58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:09 np0005534696 podman[77290]: 2025-11-25 09:33:09.535412003 +0000 UTC m=+0.079030738 container start 58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:09 np0005534696 podman[77290]: 2025-11-25 09:33:09.536464994 +0000 UTC m=+0.080083730 container attach 58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 04:33:09 np0005534696 systemd[1]: libpod-58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea.scope: Deactivated successfully.
Nov 25 04:33:09 np0005534696 reverent_proskuriakova[77304]: 167 167
Nov 25 04:33:09 np0005534696 podman[77290]: 2025-11-25 09:33:09.538429293 +0000 UTC m=+0.082048028 container died 58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 25 04:33:09 np0005534696 systemd[1]: var-lib-containers-storage-overlay-b3d6f3cf83447d2cf059ec8de4d7d352fbb57b6a06e84daac4837b5feef3a7fc-merged.mount: Deactivated successfully.
Nov 25 04:33:09 np0005534696 podman[77290]: 2025-11-25 09:33:09.55499475 +0000 UTC m=+0.098613485 container remove 58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_proskuriakova, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 25 04:33:09 np0005534696 podman[77290]: 2025-11-25 09:33:09.472552986 +0000 UTC m=+0.016171741 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:09 np0005534696 systemd[1]: libpod-conmon-58b6bd974707135fb7518dffed70528c632be44a22463bbb1a94e88839f7b5ea.scope: Deactivated successfully.
Nov 25 04:33:09 np0005534696 podman[77325]: 2025-11-25 09:33:09.665580391 +0000 UTC m=+0.027895704 container create 75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_darwin, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 04:33:09 np0005534696 systemd[1]: Started libpod-conmon-75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018.scope.
Nov 25 04:33:09 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:09 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc37d52c4aea53b304813a02ff02f3427d9fe8b821237d838f270b4cc01e1160/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:09 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc37d52c4aea53b304813a02ff02f3427d9fe8b821237d838f270b4cc01e1160/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:09 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc37d52c4aea53b304813a02ff02f3427d9fe8b821237d838f270b4cc01e1160/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:09 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc37d52c4aea53b304813a02ff02f3427d9fe8b821237d838f270b4cc01e1160/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:09 np0005534696 podman[77325]: 2025-11-25 09:33:09.719098798 +0000 UTC m=+0.081414112 container init 75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_darwin, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:09 np0005534696 podman[77325]: 2025-11-25 09:33:09.723525066 +0000 UTC m=+0.085840379 container start 75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:09 np0005534696 podman[77325]: 2025-11-25 09:33:09.724732076 +0000 UTC m=+0.087047389 container attach 75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:33:09 np0005534696 podman[77325]: 2025-11-25 09:33:09.653127264 +0000 UTC m=+0.015442587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:09 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e17 e17: 3 total, 2 up, 3 in
Nov 25 04:33:09 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2992947115' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 04:33:09 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2175287734' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]: {
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:    "2": [
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:        {
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "devices": [
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "/dev/loop3"
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            ],
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "lv_name": "ceph_lv0",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "lv_size": "21470642176",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=9TwZH5-FtKH-29qo-ZrxL-pWfI-3mQY-azsg3A,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=af1c9ae3-08d7-5547-a53d-2cccf7c6ef90,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=836b14f9-a1aa-4fbf-bd6d-42374c72028e,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "lv_uuid": "9TwZH5-FtKH-29qo-ZrxL-pWfI-3mQY-azsg3A",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "name": "ceph_lv0",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "tags": {
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.block_uuid": "9TwZH5-FtKH-29qo-ZrxL-pWfI-3mQY-azsg3A",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.cephx_lockbox_secret": "",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.cluster_fsid": "af1c9ae3-08d7-5547-a53d-2cccf7c6ef90",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.cluster_name": "ceph",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.crush_device_class": "",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.encrypted": "0",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.osd_fsid": "836b14f9-a1aa-4fbf-bd6d-42374c72028e",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.osd_id": "2",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.type": "block",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.vdo": "0",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:                "ceph.with_tpm": "0"
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            },
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "type": "block",
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:            "vg_name": "ceph_vg0"
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:        }
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]:    ]
Nov 25 04:33:09 np0005534696 intelligent_darwin[77338]: }
Nov 25 04:33:09 np0005534696 systemd[1]: libpod-75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018.scope: Deactivated successfully.
Nov 25 04:33:09 np0005534696 podman[77347]: 2025-11-25 09:33:09.977692129 +0000 UTC m=+0.016852926 container died 75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_darwin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:09 np0005534696 systemd[1]: var-lib-containers-storage-overlay-fc37d52c4aea53b304813a02ff02f3427d9fe8b821237d838f270b4cc01e1160-merged.mount: Deactivated successfully.
Nov 25 04:33:09 np0005534696 podman[77347]: 2025-11-25 09:33:09.995538675 +0000 UTC m=+0.034699452 container remove 75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_darwin, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 04:33:09 np0005534696 systemd[1]: libpod-conmon-75c2b75b04508207732b9f546dab27e1a60a2451fd075625f0b69e295c5bb018.scope: Deactivated successfully.
Nov 25 04:33:10 np0005534696 podman[77443]: 2025-11-25 09:33:10.393537768 +0000 UTC m=+0.027020124 container create 8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tharp, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:33:10 np0005534696 systemd[1]: Started libpod-conmon-8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a.scope.
Nov 25 04:33:10 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:10 np0005534696 podman[77443]: 2025-11-25 09:33:10.438557743 +0000 UTC m=+0.072040119 container init 8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:33:10 np0005534696 podman[77443]: 2025-11-25 09:33:10.442896257 +0000 UTC m=+0.076378613 container start 8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 25 04:33:10 np0005534696 podman[77443]: 2025-11-25 09:33:10.444200368 +0000 UTC m=+0.077682724 container attach 8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tharp, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:10 np0005534696 relaxed_tharp[77456]: 167 167
Nov 25 04:33:10 np0005534696 systemd[1]: libpod-8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a.scope: Deactivated successfully.
Nov 25 04:33:10 np0005534696 podman[77443]: 2025-11-25 09:33:10.446132306 +0000 UTC m=+0.079614672 container died 8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tharp, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 25 04:33:10 np0005534696 systemd[1]: var-lib-containers-storage-overlay-40bc71bdf719c2e6b5e94a8b1469863089074888fc86fbab18e02c7b9bc0d281-merged.mount: Deactivated successfully.
Nov 25 04:33:10 np0005534696 podman[77443]: 2025-11-25 09:33:10.461132292 +0000 UTC m=+0.094614648 container remove 8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:10 np0005534696 podman[77443]: 2025-11-25 09:33:10.382776147 +0000 UTC m=+0.016258523 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:10 np0005534696 systemd[1]: libpod-conmon-8dfd717300d78a0c734b5111fb3bd247e6b75ff5448bf342d67de74cffbea49a.scope: Deactivated successfully.
Nov 25 04:33:10 np0005534696 podman[77484]: 2025-11-25 09:33:10.633371463 +0000 UTC m=+0.025964898 container create ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:33:10 np0005534696 systemd[1]: Started libpod-conmon-ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3.scope.
Nov 25 04:33:10 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b960077ab526cb30b45f957422942b0ee5ca6aba86e191b9c29d4eb168270d9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b960077ab526cb30b45f957422942b0ee5ca6aba86e191b9c29d4eb168270d9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b960077ab526cb30b45f957422942b0ee5ca6aba86e191b9c29d4eb168270d9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b960077ab526cb30b45f957422942b0ee5ca6aba86e191b9c29d4eb168270d9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b960077ab526cb30b45f957422942b0ee5ca6aba86e191b9c29d4eb168270d9d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:10 np0005534696 podman[77484]: 2025-11-25 09:33:10.70365965 +0000 UTC m=+0.096253096 container init ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 25 04:33:10 np0005534696 podman[77484]: 2025-11-25 09:33:10.710509425 +0000 UTC m=+0.103102862 container start ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 04:33:10 np0005534696 podman[77484]: 2025-11-25 09:33:10.711579208 +0000 UTC m=+0.104172634 container attach ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:10 np0005534696 podman[77484]: 2025-11-25 09:33:10.623353284 +0000 UTC m=+0.015946740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e17 _set_new_cache_sizes cache_size:1020053209 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e18 e18: 3 total, 2 up, 3 in
Nov 25 04:33:10 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2175287734' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 04:33:10 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 25 04:33:10 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3148501607' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 25 04:33:10 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3148501607' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 25 04:33:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test[77497]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 25 04:33:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test[77497]:                            [--no-systemd] [--no-tmpfs]
Nov 25 04:33:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test[77497]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 25 04:33:10 np0005534696 systemd[1]: libpod-ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3.scope: Deactivated successfully.
Nov 25 04:33:10 np0005534696 podman[77484]: 2025-11-25 09:33:10.865335852 +0000 UTC m=+0.257929289 container died ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 25 04:33:10 np0005534696 systemd[1]: var-lib-containers-storage-overlay-b960077ab526cb30b45f957422942b0ee5ca6aba86e191b9c29d4eb168270d9d-merged.mount: Deactivated successfully.
Nov 25 04:33:10 np0005534696 podman[77484]: 2025-11-25 09:33:10.886019731 +0000 UTC m=+0.278613167 container remove ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:10 np0005534696 systemd[1]: libpod-conmon-ecb54e450a33ad69155c2bb4383c3207a25ecbebf2cd987d61eb1f6315e21ae3.scope: Deactivated successfully.
Nov 25 04:33:11 np0005534696 systemd[1]: Reloading.
Nov 25 04:33:11 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:33:11 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:33:11 np0005534696 systemd[1]: Reloading.
Nov 25 04:33:11 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:33:11 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:33:11 np0005534696 systemd[1]: Starting Ceph osd.2 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:33:11 np0005534696 podman[77646]: 2025-11-25 09:33:11.595384897 +0000 UTC m=+0.029231344 container create c769110e29b2457f8322710af08c1ac3356a2319ded669136085823d42c354c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:11 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:11 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9781979cdbadf93ed1f2aa6291cad6f4ab92b23427f8765a52cc474393134b2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:11 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9781979cdbadf93ed1f2aa6291cad6f4ab92b23427f8765a52cc474393134b2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:11 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9781979cdbadf93ed1f2aa6291cad6f4ab92b23427f8765a52cc474393134b2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:11 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9781979cdbadf93ed1f2aa6291cad6f4ab92b23427f8765a52cc474393134b2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:11 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9781979cdbadf93ed1f2aa6291cad6f4ab92b23427f8765a52cc474393134b2c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:11 np0005534696 podman[77646]: 2025-11-25 09:33:11.654558853 +0000 UTC m=+0.088405321 container init c769110e29b2457f8322710af08c1ac3356a2319ded669136085823d42c354c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:33:11 np0005534696 podman[77646]: 2025-11-25 09:33:11.659000389 +0000 UTC m=+0.092846837 container start c769110e29b2457f8322710af08c1ac3356a2319ded669136085823d42c354c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 04:33:11 np0005534696 podman[77646]: 2025-11-25 09:33:11.660168617 +0000 UTC m=+0.094015065 container attach c769110e29b2457f8322710af08c1ac3356a2319ded669136085823d42c354c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 04:33:11 np0005534696 podman[77646]: 2025-11-25 09:33:11.581798537 +0000 UTC m=+0.015645005 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:11 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e19 e19: 3 total, 2 up, 3 in
Nov 25 04:33:11 np0005534696 ceph-mon[75508]: Deploying daemon osd.2 on compute-2
Nov 25 04:33:11 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2930438515' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 25 04:33:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:11 np0005534696 bash[77646]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:11 np0005534696 bash[77646]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:12 np0005534696 lvm[77740]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 04:33:12 np0005534696 lvm[77740]: VG ceph_vg0 finished
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 25 04:33:12 np0005534696 bash[77646]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 25 04:33:12 np0005534696 bash[77646]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:12 np0005534696 bash[77646]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 04:33:12 np0005534696 bash[77646]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 25 04:33:12 np0005534696 bash[77646]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:12 np0005534696 bash[77646]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:12 np0005534696 bash[77646]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 04:33:12 np0005534696 bash[77646]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 04:33:12 np0005534696 bash[77646]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 25 04:33:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate[77658]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 25 04:33:12 np0005534696 bash[77646]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 25 04:33:12 np0005534696 systemd[1]: libpod-c769110e29b2457f8322710af08c1ac3356a2319ded669136085823d42c354c1.scope: Deactivated successfully.
Nov 25 04:33:12 np0005534696 podman[77852]: 2025-11-25 09:33:12.629785046 +0000 UTC m=+0.017864620 container died c769110e29b2457f8322710af08c1ac3356a2319ded669136085823d42c354c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:12 np0005534696 systemd[1]: var-lib-containers-storage-overlay-9781979cdbadf93ed1f2aa6291cad6f4ab92b23427f8765a52cc474393134b2c-merged.mount: Deactivated successfully.
Nov 25 04:33:12 np0005534696 podman[77852]: 2025-11-25 09:33:12.650290822 +0000 UTC m=+0.038370386 container remove c769110e29b2457f8322710af08c1ac3356a2319ded669136085823d42c354c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:33:12 np0005534696 podman[77898]: 2025-11-25 09:33:12.786818195 +0000 UTC m=+0.025411763 container create bdbe41bd7d1cfadefe90f7ac8acb7109bfe489b034a2c4e44cdfbfcdc00cc146 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:12 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2930438515' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 25 04:33:12 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/1942627046' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 25 04:33:12 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e20 e20: 3 total, 2 up, 3 in
Nov 25 04:33:12 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fe2105e467442dd42d68db168754ada7dfa670fa3b4f58ff0935fb64afac41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:12 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fe2105e467442dd42d68db168754ada7dfa670fa3b4f58ff0935fb64afac41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:12 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fe2105e467442dd42d68db168754ada7dfa670fa3b4f58ff0935fb64afac41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:12 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fe2105e467442dd42d68db168754ada7dfa670fa3b4f58ff0935fb64afac41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:12 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01fe2105e467442dd42d68db168754ada7dfa670fa3b4f58ff0935fb64afac41/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:12 np0005534696 podman[77898]: 2025-11-25 09:33:12.830227345 +0000 UTC m=+0.068820912 container init bdbe41bd7d1cfadefe90f7ac8acb7109bfe489b034a2c4e44cdfbfcdc00cc146 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:33:12 np0005534696 podman[77898]: 2025-11-25 09:33:12.83536774 +0000 UTC m=+0.073961317 container start bdbe41bd7d1cfadefe90f7ac8acb7109bfe489b034a2c4e44cdfbfcdc00cc146 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 04:33:12 np0005534696 bash[77898]: bdbe41bd7d1cfadefe90f7ac8acb7109bfe489b034a2c4e44cdfbfcdc00cc146
Nov 25 04:33:12 np0005534696 podman[77898]: 2025-11-25 09:33:12.776561831 +0000 UTC m=+0.015155409 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:12 np0005534696 systemd[1]: Started Ceph osd.2 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:33:12 np0005534696 ceph-osd[77914]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 04:33:12 np0005534696 ceph-osd[77914]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 25 04:33:12 np0005534696 ceph-osd[77914]: pidfile_write: ignore empty --pid-file
Nov 25 04:33:12 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:12 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:12 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:12 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:12 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:13 np0005534696 podman[78017]: 2025-11-25 09:33:13.241412737 +0000 UTC m=+0.025467256 container create 3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_proskuriakova, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 25 04:33:13 np0005534696 systemd[1]: Started libpod-conmon-3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65.scope.
Nov 25 04:33:13 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:13 np0005534696 podman[78017]: 2025-11-25 09:33:13.299880212 +0000 UTC m=+0.083934740 container init 3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 25 04:33:13 np0005534696 podman[78017]: 2025-11-25 09:33:13.304644903 +0000 UTC m=+0.088699412 container start 3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_proskuriakova, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:33:13 np0005534696 podman[78017]: 2025-11-25 09:33:13.305770441 +0000 UTC m=+0.089824949 container attach 3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_proskuriakova, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:13 np0005534696 eloquent_proskuriakova[78030]: 167 167
Nov 25 04:33:13 np0005534696 podman[78017]: 2025-11-25 09:33:13.308655082 +0000 UTC m=+0.092709590 container died 3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:33:13 np0005534696 systemd[1]: libpod-3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65.scope: Deactivated successfully.
Nov 25 04:33:13 np0005534696 systemd[1]: var-lib-containers-storage-overlay-d53efb228d8c7d952c79491462a20032d45fc2c3a63644dbd5d80316f55a2441-merged.mount: Deactivated successfully.
Nov 25 04:33:13 np0005534696 podman[78017]: 2025-11-25 09:33:13.325247761 +0000 UTC m=+0.109302269 container remove 3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_proskuriakova, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:13 np0005534696 podman[78017]: 2025-11-25 09:33:13.230179415 +0000 UTC m=+0.014233943 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:13 np0005534696 systemd[1]: libpod-conmon-3308d97ae53b87aa69578e8baaaaa078dfd8fd9b9c97a3f028f17a50bdc17c65.scope: Deactivated successfully.
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:13 np0005534696 podman[78052]: 2025-11-25 09:33:13.437208446 +0000 UTC m=+0.028616863 container create 1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_maxwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:33:13 np0005534696 systemd[1]: Started libpod-conmon-1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53.scope.
Nov 25 04:33:13 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187507f24bd0a6771f8708a41e29193ed5d2939c5824a0d2659161edd7a22dd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187507f24bd0a6771f8708a41e29193ed5d2939c5824a0d2659161edd7a22dd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187507f24bd0a6771f8708a41e29193ed5d2939c5824a0d2659161edd7a22dd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187507f24bd0a6771f8708a41e29193ed5d2939c5824a0d2659161edd7a22dd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:13 np0005534696 podman[78052]: 2025-11-25 09:33:13.486567987 +0000 UTC m=+0.077976394 container init 1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:13 np0005534696 podman[78052]: 2025-11-25 09:33:13.492146051 +0000 UTC m=+0.083554459 container start 1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:33:13 np0005534696 podman[78052]: 2025-11-25 09:33:13.494421101 +0000 UTC m=+0.085829509 container attach 1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_maxwell, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:13 np0005534696 podman[78052]: 2025-11-25 09:33:13.424397869 +0000 UTC m=+0.015806296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5800 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f8fcb5c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:13 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/1942627046' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 25 04:33:13 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:13 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:13 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3230090525' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 25 04:33:13 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e21 e21: 3 total, 2 up, 3 in
Nov 25 04:33:13 np0005534696 lvm[78148]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 04:33:13 np0005534696 lvm[78148]: VG ceph_vg0 finished
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 25 04:33:13 np0005534696 wonderful_maxwell[78067]: {}
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: load: jerasure load: lrc 
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 04:33:13 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:13 np0005534696 systemd[1]: libpod-1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53.scope: Deactivated successfully.
Nov 25 04:33:13 np0005534696 podman[78052]: 2025-11-25 09:33:13.973807684 +0000 UTC m=+0.565216091 container died 1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:13 np0005534696 systemd[1]: var-lib-containers-storage-overlay-187507f24bd0a6771f8708a41e29193ed5d2939c5824a0d2659161edd7a22dd6-merged.mount: Deactivated successfully.
Nov 25 04:33:13 np0005534696 podman[78052]: 2025-11-25 09:33:13.995122905 +0000 UTC m=+0.586531312 container remove 1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_maxwell, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:14 np0005534696 systemd[1]: libpod-conmon-1a68dd878c84d005389435bf95104513247c66b0b48e09def9c9ca5024e8bb53.scope: Deactivated successfully.
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:14 np0005534696 podman[78311]: 2025-11-25 09:33:14.613469286 +0000 UTC m=+0.037645559 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 25 04:33:14 np0005534696 podman[78311]: 2025-11-25 09:33:14.690850033 +0000 UTC m=+0.115026305 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:14 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:14 np0005534696 ceph-mon[75508]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 04:33:14 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3230090525' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 25 04:33:14 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:14 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:14 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/146186615' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e22 e22: 3 total, 2 up, 3 in
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b34c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount shared_bdev_used = 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: RocksDB version: 7.9.2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Git sha 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: DB SUMMARY
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: DB Session ID:  MQ60LQPHYFFMSMUUPAEM
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: CURRENT file:  CURRENT
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                         Options.error_if_exists: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.create_if_missing: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                                     Options.env: 0x561f8fd09650
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                                Options.info_log: 0x561f90b397c0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                              Options.statistics: (nil)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.use_fsync: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                              Options.db_log_dir: 
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.write_buffer_manager: 0x561f90c1ea00
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.unordered_write: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.row_cache: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                              Options.wal_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.two_write_queues: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.wal_compression: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.atomic_flush: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.max_background_jobs: 4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.max_background_compactions: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.max_subcompactions: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.max_open_files: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Compression algorithms supported:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kZSTD supported: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kXpressCompression supported: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kZlibCompression supported: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561f8fd4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561f8fd4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39ba0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39ba0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39ba0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6cae8901-d07b-45d9-a979-df7615351c7d
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063195091112, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063195091429, "job": 1, "event": "recovery_finished"}
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: freelist init
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: freelist _read_cfg
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs umount
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) close
Nov 25 04:33:15 np0005534696 podman[78656]: 2025-11-25 09:33:15.251932283 +0000 UTC m=+0.027521853 container create 67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_bohr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:15 np0005534696 systemd[1]: Started libpod-conmon-67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876.scope.
Nov 25 04:33:15 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:15 np0005534696 podman[78656]: 2025-11-25 09:33:15.306148138 +0000 UTC m=+0.081737708 container init 67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_bohr, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:33:15 np0005534696 podman[78656]: 2025-11-25 09:33:15.310964126 +0000 UTC m=+0.086553695 container start 67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_bohr, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:33:15 np0005534696 podman[78656]: 2025-11-25 09:33:15.311999875 +0000 UTC m=+0.087589444 container attach 67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_bohr, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bdev(0x561f90b35000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluefs mount shared_bdev_used = 4718592
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: RocksDB version: 7.9.2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Git sha 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: DB SUMMARY
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: DB Session ID:  MQ60LQPHYFFMSMUUPAEN
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: CURRENT file:  CURRENT
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: IDENTITY file:  IDENTITY
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                         Options.error_if_exists: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.create_if_missing: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                         Options.paranoid_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                                     Options.env: 0x561f8fd09110
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                                Options.info_log: 0x561f90b39960
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_file_opening_threads: 16
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                              Options.statistics: (nil)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.use_fsync: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.max_log_file_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                         Options.allow_fallocate: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.use_direct_reads: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.create_missing_column_families: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                              Options.db_log_dir: 
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                                 Options.wal_dir: db.wal
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.advise_random_on_open: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.write_buffer_manager: 0x561f90c1ea00
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                            Options.rate_limiter: (nil)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.unordered_write: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.row_cache: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                              Options.wal_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.allow_ingest_behind: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.two_write_queues: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.manual_wal_flush: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.wal_compression: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.atomic_flush: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.log_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.allow_data_in_errors: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.db_host_id: __hostname__
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.max_background_jobs: 4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.max_background_compactions: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.max_subcompactions: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.max_open_files: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.bytes_per_sync: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.max_background_flushes: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Compression algorithms supported:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kZSTD supported: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kXpressCompression supported: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kBZip2Compression supported: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kLZ4Compression supported: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kZlibCompression supported: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: 	kSnappyCompression supported: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 25 04:33:15 np0005534696 silly_bohr[78669]: 167 167
Nov 25 04:33:15 np0005534696 podman[78656]: 2025-11-25 09:33:15.320734178 +0000 UTC m=+0.096323748 container died 67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b396a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 systemd[1]: libpod-67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876.scope: Deactivated successfully.
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b396a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b396a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561f8fd4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b396a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561f8fd4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b396a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561f8fd4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 podman[78656]: 2025-11-25 09:33:15.240872205 +0000 UTC m=+0.016461784 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b396a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561f8fd4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b396a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x561f8fd4b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39ae0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 systemd[1]: var-lib-containers-storage-overlay-540f47fe8494f2382d17a36072675abcaef08271b89a65d271e9dd9896cb17de-merged.mount: Deactivated successfully.
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39ae0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:           Options.merge_operator: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.compaction_filter_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.sst_partitioner_factory: None
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561f90b39ae0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x561f8fd4a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.write_buffer_size: 16777216
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.max_write_buffer_number: 64
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.compression: LZ4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.num_levels: 7
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.level: 32767
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.compression_opts.strategy: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                  Options.compression_opts.enabled: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.arena_block_size: 1048576
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.disable_auto_compactions: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.inplace_update_support: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.bloom_locality: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                    Options.max_successive_merges: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.paranoid_file_checks: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.force_consistency_checks: 1
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.report_bg_io_stats: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                               Options.ttl: 2592000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                       Options.enable_blob_files: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                           Options.min_blob_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                          Options.blob_file_size: 268435456
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb:                Options.blob_file_starting_level: 0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 25 04:33:15 np0005534696 podman[78656]: 2025-11-25 09:33:15.346465829 +0000 UTC m=+0.122055399 container remove 67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6cae8901-d07b-45d9-a979-df7615351c7d
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063195370579, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063195373251, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6cae8901-d07b-45d9-a979-df7615351c7d", "db_session_id": "MQ60LQPHYFFMSMUUPAEN", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:33:15 np0005534696 systemd[1]: libpod-conmon-67d2a2dea1f9de066ce9d305f2bffb1faf94a1dde12fb6a53b3a744695521876.scope: Deactivated successfully.
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063195376059, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6cae8901-d07b-45d9-a979-df7615351c7d", "db_session_id": "MQ60LQPHYFFMSMUUPAEN", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063195379154, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063195, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6cae8901-d07b-45d9-a979-df7615351c7d", "db_session_id": "MQ60LQPHYFFMSMUUPAEN", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063195380106, "job": 1, "event": "recovery_finished"}
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561f90e86000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: DB pointer 0x561f90e66000
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: _get_class not permitted to load lua
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: _get_class not permitted to load sdk
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: osd.2 0 load_pgs
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: osd.2 0 load_pgs opened 0 pgs
Nov 25 04:33:15 np0005534696 ceph-osd[77914]: osd.2 0 log_to_monitors true
Nov 25 04:33:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2[77910]: 2025-11-25T09:33:15.399+0000 7faca9d19740 -1 osd.2 0 log_to_monitors true
Nov 25 04:33:15 np0005534696 podman[78906]: 2025-11-25 09:33:15.462372132 +0000 UTC m=+0.027993957 container create 6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wiles, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:33:15 np0005534696 systemd[1]: Started libpod-conmon-6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4.scope.
Nov 25 04:33:15 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:15 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad44ddaca56bb5f4676921071b5d01d059e4ec737f2959b2e76dadf7124ebb0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:15 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad44ddaca56bb5f4676921071b5d01d059e4ec737f2959b2e76dadf7124ebb0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:15 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad44ddaca56bb5f4676921071b5d01d059e4ec737f2959b2e76dadf7124ebb0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:15 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad44ddaca56bb5f4676921071b5d01d059e4ec737f2959b2e76dadf7124ebb0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:15 np0005534696 podman[78906]: 2025-11-25 09:33:15.516566966 +0000 UTC m=+0.082188791 container init 6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wiles, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 25 04:33:15 np0005534696 podman[78906]: 2025-11-25 09:33:15.521440732 +0000 UTC m=+0.087062557 container start 6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 25 04:33:15 np0005534696 podman[78906]: 2025-11-25 09:33:15.522883784 +0000 UTC m=+0.088505608 container attach 6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 04:33:15 np0005534696 podman[78906]: 2025-11-25 09:33:15.451357138 +0000 UTC m=+0.016978983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e22 _set_new_cache_sizes cache_size:1020054712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/146186615' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 25 04:33:15 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370321277' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 25 04:33:15 np0005534696 confident_wiles[78919]: [
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:    {
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "available": false,
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "being_replaced": false,
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "ceph_device_lvm": false,
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "lsm_data": {},
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "lvs": [],
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "path": "/dev/sr0",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "rejected_reasons": [
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "Insufficient space (<5GB)",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "Has a FileSystem"
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        ],
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        "sys_api": {
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "actuators": null,
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "device_nodes": [
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:                "sr0"
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            ],
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "devname": "sr0",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "human_readable_size": "474.00 KB",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "id_bus": "ata",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "model": "QEMU DVD-ROM",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "nr_requests": "64",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "parent": "/dev/sr0",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "partitions": {},
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "path": "/dev/sr0",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "removable": "1",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "rev": "2.5+",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "ro": "0",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "rotational": "1",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "sas_address": "",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "sas_device_handle": "",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "scheduler_mode": "mq-deadline",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "sectors": 0,
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "sectorsize": "2048",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "size": 485376.0,
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "support_discard": "2048",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "type": "disk",
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:            "vendor": "QEMU"
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:        }
Nov 25 04:33:15 np0005534696 confident_wiles[78919]:    }
Nov 25 04:33:15 np0005534696 confident_wiles[78919]: ]
Nov 25 04:33:15 np0005534696 systemd[1]: libpod-6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4.scope: Deactivated successfully.
Nov 25 04:33:15 np0005534696 conmon[78919]: conmon 6ebab6b17e5fc7a5b8b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4.scope/container/memory.events
Nov 25 04:33:15 np0005534696 podman[78906]: 2025-11-25 09:33:15.964016241 +0000 UTC m=+0.529638065 container died 6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wiles, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:15 np0005534696 systemd[1]: var-lib-containers-storage-overlay-ad44ddaca56bb5f4676921071b5d01d059e4ec737f2959b2e76dadf7124ebb0f-merged.mount: Deactivated successfully.
Nov 25 04:33:15 np0005534696 podman[78906]: 2025-11-25 09:33:15.984760162 +0000 UTC m=+0.550381986 container remove 6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_wiles, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 25 04:33:15 np0005534696 systemd[1]: libpod-conmon-6ebab6b17e5fc7a5b8b57fa049d1f9402f327149bcad1f6c36761bccd7edcab4.scope: Deactivated successfully.
Nov 25 04:33:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e23 e23: 3 total, 2 up, 3 in
Nov 25 04:33:16 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 25 04:33:16 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: Adjusting osd_memory_target on compute-2 to 128.7M
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: Unable to set osd_memory_target on compute-2 to 134971801: error parsing value: Value '134971801' is below minimum 939524096
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370321277' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/1194115357' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:33:17 np0005534696 ceph-osd[77914]: osd.2 0 done with init, starting boot process
Nov 25 04:33:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e24 e24: 3 total, 2 up, 3 in
Nov 25 04:33:17 np0005534696 ceph-osd[77914]: osd.2 0 start_boot
Nov 25 04:33:17 np0005534696 ceph-osd[77914]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 25 04:33:17 np0005534696 ceph-osd[77914]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 25 04:33:17 np0005534696 ceph-osd[77914]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 25 04:33:17 np0005534696 ceph-osd[77914]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 25 04:33:17 np0005534696 ceph-osd[77914]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 25 04:33:18 np0005534696 ceph-mon[75508]: from='osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 25 04:33:18 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/1194115357' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 91.632 iops: 23457.771 elapsed_sec: 0.128
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: log_channel(cluster) log [WRN] : OSD bench result of 23457.770996 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 0 waiting for initial osdmap
Nov 25 04:33:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2[77910]: 2025-11-25T09:33:18.649+0000 7faca64af640 -1 osd.2 0 waiting for initial osdmap
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 24 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 24 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 24 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 24 check_osdmap_features require_osd_release unknown -> squid
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 24 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 24 set_numa_affinity not setting numa affinity
Nov 25 04:33:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-osd-2[77910]: 2025-11-25T09:33:18.664+0000 7faca12c4640 -1 osd.2 24 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 25 04:33:18 np0005534696 ceph-osd[77914]: osd.2 24 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 25 04:33:19 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e25 e25: 3 total, 3 up, 3 in
Nov 25 04:33:19 np0005534696 ceph-osd[77914]: osd.2 25 state: booting -> active
Nov 25 04:33:19 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 25 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [2] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:33:19 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 25 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [2] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: OSD bench result of 23457.770996 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: Cluster is now healthy
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: osd.2 [v2:192.168.122.102:6800/1797544192,v1:192.168.122.102:6801/1797544192] boot
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2793311854' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2793311854' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e26 e26: 3 total, 3 up, 3 in
Nov 25 04:33:20 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 26 pg[5.0( empty local-lis/les=25/26 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [2] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:33:20 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 26 pg[3.0( empty local-lis/les=25/26 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=25) [2] r=0 lpr=25 pi=[14,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:33:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: Reconfiguring mgr.compute-0.zcfgby (monmap changed)...
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.zcfgby", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: Reconfiguring daemon mgr.compute-0.zcfgby on compute-0
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/736770130' entity='client.admin' 
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 04:33:21 np0005534696 ceph-mon[75508]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: Saving service ingress.rgw.default spec with placement count:2
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: Reconfiguring osd.1 (monmap changed)...
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: Reconfiguring daemon osd.1 on compute-0
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 25 04:33:22 np0005534696 ceph-mon[75508]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: Reconfiguring osd.0 (monmap changed)...
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: Reconfiguring daemon osd.0 on compute-1
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: Saving service node-exporter spec with placement *
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: Saving service grafana spec with placement compute-0;count:1
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: Saving service prometheus spec with placement compute-0;count:1
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: Saving service alertmanager spec with placement compute-0;count:1
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:23 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 25 04:33:23 np0005534696 podman[80436]: 2025-11-25 09:33:23.310446924 +0000 UTC m=+0.025930924 container create f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 25 04:33:23 np0005534696 systemd[1]: Started libpod-conmon-f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e.scope.
Nov 25 04:33:23 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:23 np0005534696 podman[80436]: 2025-11-25 09:33:23.359930617 +0000 UTC m=+0.075414638 container init f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 04:33:23 np0005534696 podman[80436]: 2025-11-25 09:33:23.36412431 +0000 UTC m=+0.079608300 container start f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:23 np0005534696 podman[80436]: 2025-11-25 09:33:23.365332813 +0000 UTC m=+0.080816813 container attach f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 25 04:33:23 np0005534696 hopeful_franklin[80449]: 167 167
Nov 25 04:33:23 np0005534696 podman[80436]: 2025-11-25 09:33:23.367516982 +0000 UTC m=+0.083000983 container died f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 04:33:23 np0005534696 systemd[1]: libpod-f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e.scope: Deactivated successfully.
Nov 25 04:33:23 np0005534696 systemd[1]: var-lib-containers-storage-overlay-1d1892ea4dd9fefb660431895ef51eca48003bf81a886386ad44cb8239bd1971-merged.mount: Deactivated successfully.
Nov 25 04:33:23 np0005534696 podman[80436]: 2025-11-25 09:33:23.389668218 +0000 UTC m=+0.105152218 container remove f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_franklin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 25 04:33:23 np0005534696 podman[80436]: 2025-11-25 09:33:23.29986444 +0000 UTC m=+0.015348460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:23 np0005534696 systemd[1]: libpod-conmon-f17adc0116f9b496aba0f93cd7c998c26c021a9d497d218495f0b03a47f9203e.scope: Deactivated successfully.
Nov 25 04:33:23 np0005534696 podman[80527]: 2025-11-25 09:33:23.74971531 +0000 UTC m=+0.030225735 container create 910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:23 np0005534696 systemd[1]: Started libpod-conmon-910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412.scope.
Nov 25 04:33:23 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:23 np0005534696 podman[80527]: 2025-11-25 09:33:23.823497479 +0000 UTC m=+0.104007915 container init 910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:33:23 np0005534696 podman[80527]: 2025-11-25 09:33:23.828283311 +0000 UTC m=+0.108793726 container start 910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_antonelli, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 04:33:23 np0005534696 epic_antonelli[80540]: 167 167
Nov 25 04:33:23 np0005534696 systemd[1]: libpod-910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412.scope: Deactivated successfully.
Nov 25 04:33:23 np0005534696 podman[80527]: 2025-11-25 09:33:23.736837949 +0000 UTC m=+0.017348384 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:23 np0005534696 podman[80527]: 2025-11-25 09:33:23.830659751 +0000 UTC m=+0.111170166 container attach 910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 04:33:23 np0005534696 podman[80527]: 2025-11-25 09:33:23.83444614 +0000 UTC m=+0.114956586 container died 910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_antonelli, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:33:23 np0005534696 systemd[1]: var-lib-containers-storage-overlay-4dfc0f4abd301bfc5a032f2a664a2c816a08c17d60f9a971a8bfb510a4360a0f-merged.mount: Deactivated successfully.
Nov 25 04:33:23 np0005534696 podman[80527]: 2025-11-25 09:33:23.851314034 +0000 UTC m=+0.131824449 container remove 910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:33:23 np0005534696 systemd[1]: libpod-conmon-910a489b84071778bb3dead22d593a530fa77faa7dd64bbf4cc7d560a049b412.scope: Deactivated successfully.
Nov 25 04:33:24 np0005534696 podman[80663]: 2025-11-25 09:33:24.406902698 +0000 UTC m=+0.045670034 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: Reconfiguring mgr.compute-2.flybft (monmap changed)...
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.flybft", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: Reconfiguring daemon mgr.compute-2.flybft on compute-2
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3510657336' entity='client.admin' 
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:24 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2358325226' entity='client.admin' 
Nov 25 04:33:24 np0005534696 podman[80663]: 2025-11-25 09:33:24.482303448 +0000 UTC m=+0.121070774 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3373333714' entity='client.admin' 
Nov 25 04:33:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:27 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3012149904' entity='client.admin' 
Nov 25 04:33:27 np0005534696 python3[80758]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:33:28 np0005534696 podman[80852]: 2025-11-25 09:33:28.184367693 +0000 UTC m=+0.028854869 container create 75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_johnson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:33:28 np0005534696 systemd[1]: Started libpod-conmon-75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358.scope.
Nov 25 04:33:28 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:33:28 np0005534696 podman[80852]: 2025-11-25 09:33:28.234215674 +0000 UTC m=+0.078702879 container init 75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:28 np0005534696 podman[80852]: 2025-11-25 09:33:28.238612195 +0000 UTC m=+0.083099380 container start 75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_johnson, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:33:28 np0005534696 podman[80852]: 2025-11-25 09:33:28.239918219 +0000 UTC m=+0.084405424 container attach 75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:33:28 np0005534696 fervent_johnson[80865]: 167 167
Nov 25 04:33:28 np0005534696 systemd[1]: libpod-75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358.scope: Deactivated successfully.
Nov 25 04:33:28 np0005534696 podman[80852]: 2025-11-25 09:33:28.241848465 +0000 UTC m=+0.086335650 container died 75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:33:28 np0005534696 systemd[1]: var-lib-containers-storage-overlay-3284691aac05d1ca7c663f244b50a9a35f1d3b5fc81b48ed9c10a4ed5473d649-merged.mount: Deactivated successfully.
Nov 25 04:33:28 np0005534696 podman[80852]: 2025-11-25 09:33:28.261279806 +0000 UTC m=+0.105766991 container remove 75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_johnson, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 25 04:33:28 np0005534696 podman[80852]: 2025-11-25 09:33:28.173662469 +0000 UTC m=+0.018149674 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:28 np0005534696 systemd[1]: libpod-conmon-75e6d84198f93e78134046adb918a939480f7e8ac30f32b5a3809a97163dc358.scope: Deactivated successfully.
Nov 25 04:33:28 np0005534696 systemd[1]: Reloading.
Nov 25 04:33:28 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2675069756' entity='client.admin' 
Nov 25 04:33:28 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:28 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:28 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 04:33:28 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.oidoiv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 04:33:28 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:28 np0005534696 ceph-mon[75508]: Deploying daemon rgw.rgw.compute-2.oidoiv on compute-2
Nov 25 04:33:28 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:33:28 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:33:28 np0005534696 systemd[1]: Reloading.
Nov 25 04:33:28 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:33:28 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:33:28 np0005534696 systemd[1]: Starting Ceph rgw.rgw.compute-2.oidoiv for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:33:28 np0005534696 podman[80995]: 2025-11-25 09:33:28.8396111 +0000 UTC m=+0.026232641 container create 52e88d37431671c03c9a54bf443ea81dcd9c5c531cbe90d91d144d91f8275564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-2-oidoiv, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:33:28 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0ae057492e6d285d1283f48460c870e77ef3f5be77cb4a528823d5ece47aad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:28 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0ae057492e6d285d1283f48460c870e77ef3f5be77cb4a528823d5ece47aad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:28 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0ae057492e6d285d1283f48460c870e77ef3f5be77cb4a528823d5ece47aad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:28 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad0ae057492e6d285d1283f48460c870e77ef3f5be77cb4a528823d5ece47aad/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.oidoiv supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:28 np0005534696 podman[80995]: 2025-11-25 09:33:28.885392301 +0000 UTC m=+0.072013841 container init 52e88d37431671c03c9a54bf443ea81dcd9c5c531cbe90d91d144d91f8275564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-2-oidoiv, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:33:28 np0005534696 podman[80995]: 2025-11-25 09:33:28.88939463 +0000 UTC m=+0.076016171 container start 52e88d37431671c03c9a54bf443ea81dcd9c5c531cbe90d91d144d91f8275564 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-2-oidoiv, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 25 04:33:28 np0005534696 bash[80995]: 52e88d37431671c03c9a54bf443ea81dcd9c5c531cbe90d91d144d91f8275564
Nov 25 04:33:28 np0005534696 podman[80995]: 2025-11-25 09:33:28.828365721 +0000 UTC m=+0.014987272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:33:28 np0005534696 systemd[1]: Started Ceph rgw.rgw.compute-2.oidoiv for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:33:28 np0005534696 radosgw[81011]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 25 04:33:28 np0005534696 radosgw[81011]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 25 04:33:28 np0005534696 radosgw[81011]: framework: beast
Nov 25 04:33:28 np0005534696 radosgw[81011]: framework conf key: endpoint, val: 192.168.122.102:8082
Nov 25 04:33:28 np0005534696 radosgw[81011]: init_numa not setting numa affinity
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/1272850759' entity='client.admin' 
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.lyczeh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/4100665242' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e27 e27: 3 total, 3 up, 3 in
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Nov 25 04:33:29 np0005534696 ceph-mon[75508]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/90661545' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: Deploying daemon rgw.rgw.compute-1.lyczeh on compute-1
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.102:0/90661545' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/4100665242' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.uosdwi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e28 e28: 3 total, 3 up, 3 in
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: Deploying daemon rgw.rgw.compute-0.uosdwi on compute-0
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/1685845904' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: from='mgr.14122 192.168.122.100:0/2272455046' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  1: '-n'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  2: 'mgr.compute-2.flybft'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  3: '-f'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  4: '--setuser'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  5: 'ceph'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  6: '--setgroup'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  7: 'ceph'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  8: '--default-log-to-file=false'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  9: '--default-log-to-journald=true'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr respawn  exe_path /proc/self/exe
Nov 25 04:33:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: ignoring --setuser ceph since I am not root
Nov 25 04:33:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: ignoring --setgroup ceph since I am not root
Nov 25 04:33:31 np0005534696 systemd[1]: session-24.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd[1]: session-23.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd[1]: session-22.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd[1]: session-27.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 24 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd[1]: session-31.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd[1]: session-31.scope: Consumed 47.727s CPU time.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 23 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 22 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: pidfile_write: ignore empty --pid-file
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 31 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 27 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 19 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd[1]: session-25.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd[1]: session-28.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 25 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 28 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd[1]: session-29.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 29 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd[1]: session-26.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd[1]: session-21.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 26 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 21 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd[1]: session-30.scope: Deactivated successfully.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Session 30 logged out. Waiting for processes to exit.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 24.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 23.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 22.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 27.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 31.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 19.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 25.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 28.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 29.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 26.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 21.
Nov 25 04:33:31 np0005534696 systemd-logind[744]: Removed session 30.
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'alerts'
Nov 25 04:33:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:31.618+0000 7feeb1906140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'balancer'
Nov 25 04:33:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:31.688+0000 7feeb1906140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:33:31 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'cephadm'
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e29 e29: 3 total, 3 up, 3 in
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 25 04:33:31 np0005534696 ceph-mon[75508]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 04:33:32 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'crash'
Nov 25 04:33:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:32.352+0000 7feeb1906140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:33:32 np0005534696 ceph-mgr[75792]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:33:32 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'dashboard'
Nov 25 04:33:32 np0005534696 ceph-mon[75508]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 04:33:32 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/1685845904' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 25 04:33:32 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 04:33:32 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 04:33:32 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 04:33:32 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 04:33:32 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 25 04:33:32 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'devicehealth'
Nov 25 04:33:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:32.891+0000 7feeb1906140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:33:32 np0005534696 ceph-mgr[75792]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:33:32 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 04:33:32 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e30 e30: 3 total, 3 up, 3 in
Nov 25 04:33:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 04:33:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 04:33:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]:  from numpy import show_config as show_numpy_config
Nov 25 04:33:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:33.033+0000 7feeb1906140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'influx'
Nov 25 04:33:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:33.094+0000 7feeb1906140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'insights'
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'iostat'
Nov 25 04:33:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:33.211+0000 7feeb1906140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'k8sevents'
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'localpool'
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mirroring'
Nov 25 04:33:33 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'nfs'
Nov 25 04:33:33 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e31 e31: 3 total, 3 up, 3 in
Nov 25 04:33:33 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 25 04:33:33 np0005534696 ceph-mon[75508]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 04:33:33 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 04:33:33 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 04:33:33 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 25 04:33:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:34.060+0000 7feeb1906140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'orchestrator'
Nov 25 04:33:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:34.247+0000 7feeb1906140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 04:33:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:34.312+0000 7feeb1906140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_support'
Nov 25 04:33:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:34.370+0000 7feeb1906140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 04:33:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:34.438+0000 7feeb1906140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'progress'
Nov 25 04:33:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:34.500+0000 7feeb1906140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'prometheus'
Nov 25 04:33:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:34.796+0000 7feeb1906140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rbd_support'
Nov 25 04:33:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:34.880+0000 7feeb1906140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:33:34 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'restful'
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e32 e32: 3 total, 3 up, 3 in
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 04:33:34 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rgw'
Nov 25 04:33:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:35.254+0000 7feeb1906140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rook'
Nov 25 04:33:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:35.731+0000 7feeb1906140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'selftest'
Nov 25 04:33:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:35.793+0000 7feeb1906140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'snap_schedule'
Nov 25 04:33:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:35.869+0000 7feeb1906140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'stats'
Nov 25 04:33:35 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'status'
Nov 25 04:33:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e33 e33: 3 total, 3 up, 3 in
Nov 25 04:33:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 25 04:33:35 np0005534696 ceph-mon[75508]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:35.999+0000 7feeb1906140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telegraf'
Nov 25 04:33:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:36.061+0000 7feeb1906140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telemetry'
Nov 25 04:33:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:36.195+0000 7feeb1906140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 04:33:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:36.387+0000 7feeb1906140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'volumes'
Nov 25 04:33:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:36.615+0000 7feeb1906140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'zabbix'
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e34 e34: 3 total, 3 up, 3 in
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:36.676+0000 7feeb1906140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: mgr load Constructed class from module: dashboard
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Starting engine...
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: ms_deliver_dispatch: unhandled message 0x55b2e4ad5860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Nov 25 04:33:36 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Engine started...
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: Active manager daemon compute-0.zcfgby restarted
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: Activating manager daemon compute-0.zcfgby
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.101:0/1293368742' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.102:0/1045634058' entity='client.rgw.rgw.compute-2.oidoiv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: Manager daemon compute-0.zcfgby is now available
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 04:33:36 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 04:33:37 np0005534696 systemd-logind[744]: New session 32 of user ceph-admin.
Nov 25 04:33:37 np0005534696 systemd[1]: Started Session 32 of User ceph-admin.
Nov 25 04:33:37 np0005534696 podman[81753]: 2025-11-25 09:33:37.556717205 +0000 UTC m=+0.037321296 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:33:37 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e35 e35: 3 total, 3 up, 3 in
Nov 25 04:33:37 np0005534696 podman[81753]: 2025-11-25 09:33:37.65566453 +0000 UTC m=+0.136268621 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 04:33:37 np0005534696 radosgw[81011]: v1 topic migration: starting v1 topic migration..
Nov 25 04:33:37 np0005534696 radosgw[81011]: LDAP not started since no server URIs were provided in the configuration.
Nov 25 04:33:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-rgw-rgw-compute-2-oidoiv[81007]: 2025-11-25T09:33:37.783+0000 7fe47acc1980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 25 04:33:37 np0005534696 radosgw[81011]: v1 topic migration: finished v1 topic migration
Nov 25 04:33:37 np0005534696 radosgw[81011]: framework: beast
Nov 25 04:33:37 np0005534696 radosgw[81011]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 25 04:33:37 np0005534696 radosgw[81011]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 25 04:33:37 np0005534696 radosgw[81011]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 25 04:33:37 np0005534696 radosgw[81011]: starting handler: beast
Nov 25 04:33:37 np0005534696 radosgw[81011]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 04:33:37 np0005534696 radosgw[81011]: mgrc service_daemon_register rgw.24163 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.oidoiv,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865372,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6af48147-6aba-44e3-91a3-565a32433f82,zone_name=default,zonegroup_id=7f877101-a613-42fa-9374-f143e99606e2,zonegroup_name=default}
Nov 25 04:33:37 np0005534696 radosgw[81011]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-1.lyczeh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/370176697' entity='client.rgw.rgw.compute-0.uosdwi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='client.? ' entity='client.rgw.rgw.compute-2.oidoiv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:38] ENGINE Bus STARTING
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:38] ENGINE Serving on http://192.168.122.100:8765
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:38] ENGINE Serving on https://192.168.122.100:7150
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:38] ENGINE Bus STARTED
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:38] ENGINE Client ('192.168.122.100', 57652) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 04:33:38 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: Cluster is now healthy
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: from='mgr.14430 192.168.122.100:0/1752214448' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:39 np0005534696 ceph-mon[75508]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:40 np0005534696 ceph-mon[75508]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:33:40 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:33:40 np0005534696 ceph-mon[75508]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:33:40 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:33:40 np0005534696 ceph-mon[75508]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:33:40 np0005534696 ceph-mon[75508]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  1: '-n'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  2: 'mgr.compute-2.flybft'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  3: '-f'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  4: '--setuser'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  5: 'ceph'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  6: '--setgroup'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  7: 'ceph'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  8: '--default-log-to-file=false'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  9: '--default-log-to-journald=true'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr respawn  exe_path /proc/self/exe
Nov 25 04:33:41 np0005534696 systemd[1]: session-32.scope: Deactivated successfully.
Nov 25 04:33:41 np0005534696 systemd[1]: session-32.scope: Consumed 3.148s CPU time.
Nov 25 04:33:41 np0005534696 systemd-logind[744]: Session 32 logged out. Waiting for processes to exit.
Nov 25 04:33:41 np0005534696 systemd-logind[744]: Removed session 32.
Nov 25 04:33:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: ignoring --setuser ceph since I am not root
Nov 25 04:33:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: ignoring --setgroup ceph since I am not root
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: pidfile_write: ignore empty --pid-file
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'alerts'
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'balancer'
Nov 25 04:33:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:41.671+0000 7f6a90726140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:33:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'cephadm'
Nov 25 04:33:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:41.741+0000 7f6a90726140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:33:41 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/321985415' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 25 04:33:41 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/321985415' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 25 04:33:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'crash'
Nov 25 04:33:42 np0005534696 ceph-mgr[75792]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:33:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'dashboard'
Nov 25 04:33:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:42.404+0000 7f6a90726140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:33:42 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2461625104' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 25 04:33:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'devicehealth'
Nov 25 04:33:42 np0005534696 ceph-mgr[75792]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:33:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 04:33:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:42.954+0000 7f6a90726140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:33:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 04:33:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 04:33:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]:  from numpy import show_config as show_numpy_config
Nov 25 04:33:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:43.095+0000 7f6a90726140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'influx'
Nov 25 04:33:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:43.157+0000 7f6a90726140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'insights'
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'iostat'
Nov 25 04:33:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:43.275+0000 7f6a90726140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'k8sevents'
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'localpool'
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mirroring'
Nov 25 04:33:43 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/2461625104' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 25 04:33:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'nfs'
Nov 25 04:33:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:44.125+0000 7f6a90726140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'orchestrator'
Nov 25 04:33:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:44.310+0000 7f6a90726140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 04:33:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:44.376+0000 7f6a90726140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_support'
Nov 25 04:33:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:44.434+0000 7f6a90726140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 04:33:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:44.501+0000 7f6a90726140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'progress'
Nov 25 04:33:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:44.563+0000 7f6a90726140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'prometheus'
Nov 25 04:33:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:44.858+0000 7f6a90726140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rbd_support'
Nov 25 04:33:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:44.942+0000 7f6a90726140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:33:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'restful'
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rgw'
Nov 25 04:33:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:45.315+0000 7f6a90726140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rook'
Nov 25 04:33:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:45.793+0000 7f6a90726140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'selftest'
Nov 25 04:33:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:45.855+0000 7f6a90726140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'snap_schedule'
Nov 25 04:33:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:45.925+0000 7f6a90726140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'stats'
Nov 25 04:33:45 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'status'
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:46.053+0000 7f6a90726140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telegraf'
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:46.114+0000 7f6a90726140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telemetry'
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:46.246+0000 7f6a90726140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:46.433+0000 7f6a90726140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'volumes'
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:46.659+0000 7f6a90726140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'zabbix'
Nov 25 04:33:46 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e36 e36: 3 total, 3 up, 3 in
Nov 25 04:33:46 np0005534696 ceph-mon[75508]: Active manager daemon compute-0.zcfgby restarted
Nov 25 04:33:46 np0005534696 ceph-mon[75508]: Activating manager daemon compute-0.zcfgby
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:46.721+0000 7f6a90726140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: ms_deliver_dispatch: unhandled message 0x55f297efd860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  1: '-n'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  2: 'mgr.compute-2.flybft'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  3: '-f'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  4: '--setuser'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  5: 'ceph'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  6: '--setgroup'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  7: 'ceph'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  8: '--default-log-to-file=false'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  9: '--default-log-to-journald=true'
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: ignoring --setuser ceph since I am not root
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: ignoring --setgroup ceph since I am not root
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: pidfile_write: ignore empty --pid-file
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'alerts'
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:46.898+0000 7ffbd4613140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'balancer'
Nov 25 04:33:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:46.968+0000 7ffbd4613140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:33:46 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'cephadm'
Nov 25 04:33:47 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'crash'
Nov 25 04:33:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:47.627+0000 7ffbd4613140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:33:47 np0005534696 ceph-mgr[75792]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:33:47 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'dashboard'
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'devicehealth'
Nov 25 04:33:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:48.172+0000 7ffbd4613140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 04:33:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 04:33:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 04:33:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]:  from numpy import show_config as show_numpy_config
Nov 25 04:33:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:48.315+0000 7ffbd4613140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'influx'
Nov 25 04:33:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:48.377+0000 7ffbd4613140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'insights'
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'iostat'
Nov 25 04:33:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:48.496+0000 7ffbd4613140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'k8sevents'
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'localpool'
Nov 25 04:33:48 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mirroring'
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'nfs'
Nov 25 04:33:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:49.345+0000 7ffbd4613140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'orchestrator'
Nov 25 04:33:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:49.532+0000 7ffbd4613140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 04:33:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:49.598+0000 7ffbd4613140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_support'
Nov 25 04:33:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:49.656+0000 7ffbd4613140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 04:33:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:49.723+0000 7ffbd4613140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'progress'
Nov 25 04:33:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:49.785+0000 7ffbd4613140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:33:49 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'prometheus'
Nov 25 04:33:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:50.081+0000 7ffbd4613140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:33:50 np0005534696 ceph-mgr[75792]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:33:50 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rbd_support'
Nov 25 04:33:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:50.165+0000 7ffbd4613140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:33:50 np0005534696 ceph-mgr[75792]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:33:50 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'restful'
Nov 25 04:33:50 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rgw'
Nov 25 04:33:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:50.534+0000 7ffbd4613140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:33:50 np0005534696 ceph-mgr[75792]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:33:50 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rook'
Nov 25 04:33:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.014+0000 7ffbd4613140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'selftest'
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.076+0000 7ffbd4613140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'snap_schedule'
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.146+0000 7ffbd4613140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'stats'
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'status'
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.274+0000 7ffbd4613140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telegraf'
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.335+0000 7ffbd4613140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telemetry'
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.468+0000 7ffbd4613140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.657+0000 7ffbd4613140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'volumes'
Nov 25 04:33:51 np0005534696 systemd[1]: Stopping User Manager for UID 42477...
Nov 25 04:33:51 np0005534696 systemd[72488]: Activating special unit Exit the Session...
Nov 25 04:33:51 np0005534696 systemd[72488]: Stopped target Main User Target.
Nov 25 04:33:51 np0005534696 systemd[72488]: Stopped target Basic System.
Nov 25 04:33:51 np0005534696 systemd[72488]: Stopped target Paths.
Nov 25 04:33:51 np0005534696 systemd[72488]: Stopped target Sockets.
Nov 25 04:33:51 np0005534696 systemd[72488]: Stopped target Timers.
Nov 25 04:33:51 np0005534696 systemd[72488]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 25 04:33:51 np0005534696 systemd[72488]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 04:33:51 np0005534696 systemd[72488]: Closed D-Bus User Message Bus Socket.
Nov 25 04:33:51 np0005534696 systemd[72488]: Stopped Create User's Volatile Files and Directories.
Nov 25 04:33:51 np0005534696 systemd[72488]: Removed slice User Application Slice.
Nov 25 04:33:51 np0005534696 systemd[72488]: Reached target Shutdown.
Nov 25 04:33:51 np0005534696 systemd[72488]: Finished Exit the Session.
Nov 25 04:33:51 np0005534696 systemd[72488]: Reached target Exit the Session.
Nov 25 04:33:51 np0005534696 systemd[1]: user@42477.service: Deactivated successfully.
Nov 25 04:33:51 np0005534696 systemd[1]: Stopped User Manager for UID 42477.
Nov 25 04:33:51 np0005534696 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 25 04:33:51 np0005534696 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 25 04:33:51 np0005534696 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 25 04:33:51 np0005534696 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 25 04:33:51 np0005534696 systemd[1]: Removed slice User Slice of UID 42477.
Nov 25 04:33:51 np0005534696 systemd[1]: user-42477.slice: Consumed 51.662s CPU time.
Nov 25 04:33:51 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e37 e37: 3 total, 3 up, 3 in
Nov 25 04:33:51 np0005534696 ceph-mon[75508]: Active manager daemon compute-0.zcfgby restarted
Nov 25 04:33:51 np0005534696 ceph-mon[75508]: Activating manager daemon compute-0.zcfgby
Nov 25 04:33:51 np0005534696 ceph-mon[75508]: Manager daemon compute-0.zcfgby is now available
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.899+0000 7ffbd4613140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'zabbix'
Nov 25 04:33:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:33:51.962+0000 7ffbd4613140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: mgr load Constructed class from module: dashboard
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: ms_deliver_dispatch: unhandled message 0x55eefd7ab860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 04:33:51 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Starting engine...
Nov 25 04:33:52 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Engine started...
Nov 25 04:33:52 np0005534696 systemd-logind[744]: New session 33 of user ceph-admin.
Nov 25 04:33:52 np0005534696 systemd[1]: Created slice User Slice of UID 42477.
Nov 25 04:33:52 np0005534696 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 25 04:33:52 np0005534696 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 25 04:33:52 np0005534696 systemd[1]: Starting User Manager for UID 42477...
Nov 25 04:33:52 np0005534696 systemd[82992]: Queued start job for default target Main User Target.
Nov 25 04:33:52 np0005534696 systemd[82992]: Created slice User Application Slice.
Nov 25 04:33:52 np0005534696 systemd[82992]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 04:33:52 np0005534696 systemd[82992]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 04:33:52 np0005534696 systemd[82992]: Reached target Paths.
Nov 25 04:33:52 np0005534696 systemd[82992]: Reached target Timers.
Nov 25 04:33:52 np0005534696 systemd[82992]: Starting D-Bus User Message Bus Socket...
Nov 25 04:33:52 np0005534696 systemd[82992]: Starting Create User's Volatile Files and Directories...
Nov 25 04:33:52 np0005534696 systemd[82992]: Finished Create User's Volatile Files and Directories.
Nov 25 04:33:52 np0005534696 systemd[82992]: Listening on D-Bus User Message Bus Socket.
Nov 25 04:33:52 np0005534696 systemd[82992]: Reached target Sockets.
Nov 25 04:33:52 np0005534696 systemd[82992]: Reached target Basic System.
Nov 25 04:33:52 np0005534696 systemd[1]: Started User Manager for UID 42477.
Nov 25 04:33:52 np0005534696 systemd[82992]: Reached target Main User Target.
Nov 25 04:33:52 np0005534696 systemd[82992]: Startup finished in 84ms.
Nov 25 04:33:52 np0005534696 systemd[1]: Started Session 33 of User ceph-admin.
Nov 25 04:33:52 np0005534696 podman[83113]: 2025-11-25 09:33:52.76116055 +0000 UTC m=+0.039537730 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 25 04:33:52 np0005534696 podman[83113]: 2025-11-25 09:33:52.840541841 +0000 UTC m=+0.118919011 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e2 new map
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e2 print_map#012e2#012btime 2025-11-25T09:33:52:871701+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T09:33:52.871685+0000#012modified#0112025-11-25T09:33:52.871685+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e38 e38: 3 total, 3 up, 3 in
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 25 04:33:52 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:53] ENGINE Bus STARTING
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:53] ENGINE Serving on https://192.168.122.100:7150
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:53] ENGINE Client ('192.168.122.100', 59432) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:53] ENGINE Serving on http://192.168.122.100:8765
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:33:53] ENGINE Bus STARTED
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 04:33:53 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 04:33:54 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 04:33:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e39 e39: 3 total, 3 up, 3 in
Nov 25 04:33:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:56 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e40 e40: 3 total, 3 up, 3 in
Nov 25 04:33:56 np0005534696 systemd[1]: Reloading.
Nov 25 04:33:56 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:33:56 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:33:56 np0005534696 systemd[1]: Reloading.
Nov 25 04:33:56 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:33:56 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:33:56 np0005534696 systemd[1]: Starting Ceph node-exporter.compute-2 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:33:56 np0005534696 bash[84428]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 25 04:33:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 e41: 3 total, 3 up, 3 in
Nov 25 04:33:57 np0005534696 ceph-mon[75508]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 25 04:33:57 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 25 04:33:57 np0005534696 ceph-mon[75508]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 04:33:57 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:57 np0005534696 ceph-mon[75508]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 25 04:33:57 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:57 np0005534696 ceph-mon[75508]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 25 04:33:57 np0005534696 bash[84428]: Getting image source signatures
Nov 25 04:33:57 np0005534696 bash[84428]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 25 04:33:57 np0005534696 bash[84428]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 25 04:33:57 np0005534696 bash[84428]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 25 04:33:57 np0005534696 bash[84428]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 25 04:33:57 np0005534696 bash[84428]: Writing manifest to image destination
Nov 25 04:33:57 np0005534696 podman[84428]: 2025-11-25 09:33:57.887216768 +0000 UTC m=+1.047393242 container create 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:33:57 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291ec78e463b2bf806826da075accb3b0e926e13292b59d3bd2b77dcf1c4b2a3/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 25 04:33:57 np0005534696 podman[84428]: 2025-11-25 09:33:57.925516339 +0000 UTC m=+1.085692823 container init 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:33:57 np0005534696 podman[84428]: 2025-11-25 09:33:57.929126612 +0000 UTC m=+1.089303076 container start 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:33:57 np0005534696 bash[84428]: 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67
Nov 25 04:33:57 np0005534696 podman[84428]: 2025-11-25 09:33:57.877428124 +0000 UTC m=+1.037604598 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.933Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.933Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.934Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 04:33:57 np0005534696 systemd[1]: Started Ceph node-exporter.compute-2 for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.934Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.934Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.934Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=dmi
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=entropy
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.935Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=os
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=pressure
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=rapl
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=selinux
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=stat
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=textfile
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=time
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=uname
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.936Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.938Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 25 04:33:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2[84491]: ts=2025-11-25T09:33:57.938Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3756297363' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/3756297363' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 25 04:33:58 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644062899' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 25 04:33:59 np0005534696 ceph-mon[75508]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 25 04:34:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:01 np0005534696 ceph-mon[75508]: from='client.? 192.168.122.100:0/1817509438' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 25 04:34:01 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:01 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:01 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 04:34:01 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pwazzx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 04:34:01 np0005534696 podman[84584]: 2025-11-25 09:34:01.494896459 +0000 UTC m=+0.027713843 container create 7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_goldstine, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 04:34:01 np0005534696 systemd[1]: Started libpod-conmon-7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63.scope.
Nov 25 04:34:01 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:34:01 np0005534696 podman[84584]: 2025-11-25 09:34:01.55470755 +0000 UTC m=+0.087524935 container init 7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_goldstine, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:34:01 np0005534696 podman[84584]: 2025-11-25 09:34:01.559206253 +0000 UTC m=+0.092023637 container start 7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 25 04:34:01 np0005534696 podman[84584]: 2025-11-25 09:34:01.561359068 +0000 UTC m=+0.094176462 container attach 7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_goldstine, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:34:01 np0005534696 recursing_goldstine[84598]: 167 167
Nov 25 04:34:01 np0005534696 systemd[1]: libpod-7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63.scope: Deactivated successfully.
Nov 25 04:34:01 np0005534696 conmon[84598]: conmon 7a06443d211e8ba6189d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63.scope/container/memory.events
Nov 25 04:34:01 np0005534696 podman[84584]: 2025-11-25 09:34:01.563532322 +0000 UTC m=+0.096349716 container died 7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_goldstine, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:34:01 np0005534696 systemd[1]: var-lib-containers-storage-overlay-d545862a2173179a5c70789ccae04098e84386b1b5abc33256e9b16b924cc898-merged.mount: Deactivated successfully.
Nov 25 04:34:01 np0005534696 podman[84584]: 2025-11-25 09:34:01.483797405 +0000 UTC m=+0.016614810 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:34:01 np0005534696 podman[84584]: 2025-11-25 09:34:01.581382461 +0000 UTC m=+0.114199845 container remove 7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_goldstine, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:34:01 np0005534696 systemd[1]: libpod-conmon-7a06443d211e8ba6189dc59bcab1c70ef1fe93ac72679a4691a146d24b4a8b63.scope: Deactivated successfully.
Nov 25 04:34:01 np0005534696 systemd[1]: Reloading.
Nov 25 04:34:01 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:34:01 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:34:01 np0005534696 systemd[1]: Reloading.
Nov 25 04:34:01 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:34:01 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:34:02 np0005534696 systemd[1]: Starting Ceph mds.cephfs.compute-2.pwazzx for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:34:02 np0005534696 podman[84728]: 2025-11-25 09:34:02.180904872 +0000 UTC m=+0.025199317 container create a94f38e75ebce9989e8e35143e703827d844f1287665dcc44e3bebc788162b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-2-pwazzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:34:02 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f12cda7dacd294942b8d83c378d4fbd9ae6996620372bce3110612d4a97b3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:02 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f12cda7dacd294942b8d83c378d4fbd9ae6996620372bce3110612d4a97b3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:02 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f12cda7dacd294942b8d83c378d4fbd9ae6996620372bce3110612d4a97b3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:02 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91f12cda7dacd294942b8d83c378d4fbd9ae6996620372bce3110612d4a97b3e/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.pwazzx supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:02 np0005534696 podman[84728]: 2025-11-25 09:34:02.222585026 +0000 UTC m=+0.066879492 container init a94f38e75ebce9989e8e35143e703827d844f1287665dcc44e3bebc788162b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-2-pwazzx, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 25 04:34:02 np0005534696 podman[84728]: 2025-11-25 09:34:02.226225236 +0000 UTC m=+0.070519682 container start a94f38e75ebce9989e8e35143e703827d844f1287665dcc44e3bebc788162b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-2-pwazzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 04:34:02 np0005534696 bash[84728]: a94f38e75ebce9989e8e35143e703827d844f1287665dcc44e3bebc788162b4f
Nov 25 04:34:02 np0005534696 podman[84728]: 2025-11-25 09:34:02.169853397 +0000 UTC m=+0.014147863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:34:02 np0005534696 systemd[1]: Started Ceph mds.cephfs.compute-2.pwazzx for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: set uid:gid to 167:167 (ceph:ceph)
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: main not setting numa affinity
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: pidfile_write: ignore empty --pid-file
Nov 25 04:34:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-2-pwazzx[84740]: starting mds.cephfs.compute-2.pwazzx at 
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx Updating MDS map to version 2 from mon.0
Nov 25 04:34:02 np0005534696 ceph-mon[75508]: Deploying daemon mds.cephfs.compute-2.pwazzx on compute-2
Nov 25 04:34:02 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx Updating MDS map to version 3 from mon.0
Nov 25 04:34:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e3 new map
Nov 25 04:34:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e3 print_map#012e3#012btime 2025-11-25T09:34:02:633817+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T09:33:52.871685+0000#012modified#0112025-11-25T09:34:02.633809+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14601}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.pwazzx{0:14601} state up:creating seq 1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.3 handle_mds_map I am now mds.0.3
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x1
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x100
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x600
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x601
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x602
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x603
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x604
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x605
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x606
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x607
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x608
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.cache creating system inode with ino:0x609
Nov 25 04:34:02 np0005534696 ceph-mds[84744]: mds.0.3 creating_done
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjveyw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: Deploying daemon mds.cephfs.compute-0.wjveyw on compute-0
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: daemon mds.cephfs.compute-2.pwazzx assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: Cluster is now healthy
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: daemon mds.cephfs.compute-2.pwazzx is now active in filesystem cephfs as rank 0
Nov 25 04:34:03 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx Updating MDS map to version 4 from mon.0
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e4 new map
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e4 print_map#012e4#012btime 2025-11-25T09:34:03:638492+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T09:33:52.871685+0000#012modified#0112025-11-25T09:34:03.638490+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14601}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14601 members: 14601#012[mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 04:34:03 np0005534696 ceph-mds[84744]: mds.0.3 handle_mds_map I am now mds.0.3
Nov 25 04:34:03 np0005534696 ceph-mds[84744]: mds.0.3 handle_mds_map state change up:creating --> up:active
Nov 25 04:34:03 np0005534696 ceph-mds[84744]: mds.0.3 recovery_done -- successful recovery!
Nov 25 04:34:03 np0005534696 ceph-mds[84744]: mds.0.3 active_start
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e5 new map
Nov 25 04:34:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e5 print_map#012e5#012btime 2025-11-25T09:34:03:644218+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T09:33:52.871685+0000#012modified#0112025-11-25T09:34:03.638490+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14601}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14601 members: 14601#012[mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 04:34:04 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:04 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:04 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:04 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 25 04:34:04 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.knpqas", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 25 04:34:04 np0005534696 ceph-mon[75508]: Deploying daemon mds.cephfs.compute-1.knpqas on compute-1
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e6 new map
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e6 print_map#012e6#012btime 2025-11-25T09:34:05:420267+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T09:33:52.871685+0000#012modified#0112025-11-25T09:34:03.638490+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14601}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14601 members: 14601#012[mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 2 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 25 04:34:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:07 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx Updating MDS map to version 7 from mon.0
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e7 new map
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e7 print_map#012e7#012btime 2025-11-25T09:34:07:562567+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T09:33:52.871685+0000#012modified#0112025-11-25T09:34:06.658104+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14601}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14601 members: 14601#012[mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 25 04:34:07 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:07 np0005534696 ceph-mds[84744]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 25 04:34:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mds-cephfs-compute-2-pwazzx[84740]: 2025-11-25T09:34:07.648+0000 7fd8d4b0c640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Nov 25 04:34:08 np0005534696 ceph-mon[75508]: Regenerating cephadm self-signed grafana TLS certificates
Nov 25 04:34:08 np0005534696 ceph-mon[75508]: Deploying daemon grafana.compute-0 on compute-0
Nov 25 04:34:08 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e8 new map
Nov 25 04:34:08 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).mds e8 print_map#012e8#012btime 2025-11-25T09:34:08:572718+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-25T09:33:52.871685+0000#012modified#0112025-11-25T09:34:06.658104+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14601}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14601 members: 14601#012[mds.cephfs.compute-2.pwazzx{0:14601} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/152534687,v1:192.168.122.102:6805/152534687] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-1.knpqas{-1:24293} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1211782045,v1:192.168.122.101:6805/1211782045] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-0.wjveyw{-1:24295} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1124998105,v1:192.168.122.100:6807/1124998105] compat {c=[1],r=[1],i=[1fff]}]
Nov 25 04:34:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:12 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:15 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:15 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:15 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:15 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:15 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:15 np0005534696 ceph-mon[75508]: Deploying daemon haproxy.rgw.default.compute-0.jgcdmc on compute-0
Nov 25 04:34:17 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:22 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:22 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:22 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:22 np0005534696 ceph-mon[75508]: Deploying daemon haproxy.rgw.default.compute-2.jrahab on compute-2
Nov 25 04:34:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.002000007s ======
Nov 25 04:34:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000007s
Nov 25 04:34:23 np0005534696 podman[84861]: 2025-11-25 09:34:23.603480691 +0000 UTC m=+2.149666697 container create bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce (image=quay.io/ceph/haproxy:2.3, name=interesting_golick)
Nov 25 04:34:23 np0005534696 systemd[1]: Started libpod-conmon-bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce.scope.
Nov 25 04:34:23 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:34:23 np0005534696 podman[84861]: 2025-11-25 09:34:23.656400644 +0000 UTC m=+2.202586650 container init bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce (image=quay.io/ceph/haproxy:2.3, name=interesting_golick)
Nov 25 04:34:23 np0005534696 podman[84861]: 2025-11-25 09:34:23.660951907 +0000 UTC m=+2.207137912 container start bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce (image=quay.io/ceph/haproxy:2.3, name=interesting_golick)
Nov 25 04:34:23 np0005534696 podman[84861]: 2025-11-25 09:34:23.662105263 +0000 UTC m=+2.208291269 container attach bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce (image=quay.io/ceph/haproxy:2.3, name=interesting_golick)
Nov 25 04:34:23 np0005534696 interesting_golick[84958]: 0 0
Nov 25 04:34:23 np0005534696 systemd[1]: libpod-bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce.scope: Deactivated successfully.
Nov 25 04:34:23 np0005534696 conmon[84958]: conmon bc2c7e7ab3f265f9cb4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce.scope/container/memory.events
Nov 25 04:34:23 np0005534696 podman[84861]: 2025-11-25 09:34:23.593670859 +0000 UTC m=+2.139856885 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 04:34:23 np0005534696 podman[84963]: 2025-11-25 09:34:23.69696931 +0000 UTC m=+0.018484051 container died bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce (image=quay.io/ceph/haproxy:2.3, name=interesting_golick)
Nov 25 04:34:23 np0005534696 systemd[1]: var-lib-containers-storage-overlay-b98494ed561b0e6a23709f1d8ffd00b16e68c52f722fc267ce0098056f5fd1b7-merged.mount: Deactivated successfully.
Nov 25 04:34:23 np0005534696 podman[84963]: 2025-11-25 09:34:23.713787641 +0000 UTC m=+0.035302371 container remove bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce (image=quay.io/ceph/haproxy:2.3, name=interesting_golick)
Nov 25 04:34:23 np0005534696 systemd[1]: libpod-conmon-bc2c7e7ab3f265f9cb4f68e03dc68a017ba8b468034e36686db3ad5f8f7ae4ce.scope: Deactivated successfully.
Nov 25 04:34:23 np0005534696 systemd[1]: Reloading.
Nov 25 04:34:23 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:34:23 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:34:23 np0005534696 systemd[1]: Reloading.
Nov 25 04:34:24 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:34:24 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:34:24 np0005534696 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.jrahab for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:34:24 np0005534696 podman[85097]: 2025-11-25 09:34:24.340542731 +0000 UTC m=+0.033963255 container create 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:34:24 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c0e12991f74de570193b934f19492f92ea4cf75c1e1e4aa0e6b1d49b3e92cf/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:24 np0005534696 podman[85097]: 2025-11-25 09:34:24.381374792 +0000 UTC m=+0.074795336 container init 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:34:24 np0005534696 podman[85097]: 2025-11-25 09:34:24.385793634 +0000 UTC m=+0.079214159 container start 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:34:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:24 np0005534696 bash[85097]: 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5
Nov 25 04:34:24 np0005534696 podman[85097]: 2025-11-25 09:34:24.328358337 +0000 UTC m=+0.021778881 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 04:34:24 np0005534696 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.jrahab for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:34:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab[85109]: [NOTICE] 328/093424 (2) : New worker #1 (4) forked
Nov 25 04:34:25 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:25 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:25 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:25 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:25 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 04:34:25 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 04:34:25 np0005534696 ceph-mon[75508]: Deploying daemon keepalived.rgw.default.compute-2.aswfow on compute-2
Nov 25 04:34:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000004s ======
Nov 25 04:34:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000004s
Nov 25 04:34:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:28 np0005534696 podman[85204]: 2025-11-25 09:34:28.968452195 +0000 UTC m=+4.179502349 container create f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860 (image=quay.io/ceph/keepalived:2.2.4, name=gallant_northcutt, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 25 04:34:28 np0005534696 systemd[1]: Started libpod-conmon-f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860.scope.
Nov 25 04:34:29 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:34:29 np0005534696 podman[85204]: 2025-11-25 09:34:28.958882574 +0000 UTC m=+4.169932748 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 04:34:29 np0005534696 podman[85204]: 2025-11-25 09:34:29.019093677 +0000 UTC m=+4.230143850 container init f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860 (image=quay.io/ceph/keepalived:2.2.4, name=gallant_northcutt, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph)
Nov 25 04:34:29 np0005534696 podman[85204]: 2025-11-25 09:34:29.024430925 +0000 UTC m=+4.235481089 container start f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860 (image=quay.io/ceph/keepalived:2.2.4, name=gallant_northcutt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.openshift.expose-services=, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vcs-type=git)
Nov 25 04:34:29 np0005534696 podman[85204]: 2025-11-25 09:34:29.025958185 +0000 UTC m=+4.237008339 container attach f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860 (image=quay.io/ceph/keepalived:2.2.4, name=gallant_northcutt, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public)
Nov 25 04:34:29 np0005534696 gallant_northcutt[85284]: 0 0
Nov 25 04:34:29 np0005534696 systemd[1]: libpod-f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860.scope: Deactivated successfully.
Nov 25 04:34:29 np0005534696 conmon[85284]: conmon f63932f5b6c2d848136b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860.scope/container/memory.events
Nov 25 04:34:29 np0005534696 podman[85204]: 2025-11-25 09:34:29.029285376 +0000 UTC m=+4.240335540 container died f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860 (image=quay.io/ceph/keepalived:2.2.4, name=gallant_northcutt, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public)
Nov 25 04:34:29 np0005534696 systemd[1]: var-lib-containers-storage-overlay-a81314af4a567af0a335dd96da49e6280e37d9bd763259210e025fedf63738f4-merged.mount: Deactivated successfully.
Nov 25 04:34:29 np0005534696 podman[85204]: 2025-11-25 09:34:29.045378374 +0000 UTC m=+4.256428528 container remove f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860 (image=quay.io/ceph/keepalived:2.2.4, name=gallant_northcutt, vendor=Red Hat, Inc., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-type=git)
Nov 25 04:34:29 np0005534696 systemd[1]: libpod-conmon-f63932f5b6c2d848136b5230b1088686d55b11ccf73d7747ef76edcb03b02860.scope: Deactivated successfully.
Nov 25 04:34:29 np0005534696 systemd[1]: Reloading.
Nov 25 04:34:29 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:34:29 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:34:29 np0005534696 systemd[1]: Reloading.
Nov 25 04:34:29 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:34:29 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:34:29 np0005534696 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.aswfow for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:34:29 np0005534696 podman[85420]: 2025-11-25 09:34:29.646291869 +0000 UTC m=+0.027522955 container create 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, release=1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=)
Nov 25 04:34:29 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d70deb7429ff3d4e01770938e8937c0926464525077d0867b015ef9cfbe4930/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:29 np0005534696 podman[85420]: 2025-11-25 09:34:29.684489449 +0000 UTC m=+0.065720554 container init 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, version=2.2.4, distribution-scope=public, release=1793, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 25 04:34:29 np0005534696 podman[85420]: 2025-11-25 09:34:29.688071911 +0000 UTC m=+0.069303006 container start 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, vendor=Red Hat, Inc., name=keepalived, version=2.2.4, com.redhat.component=keepalived-container, architecture=x86_64, io.buildah.version=1.28.2, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vcs-type=git, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 04:34:29 np0005534696 bash[85420]: 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8
Nov 25 04:34:29 np0005534696 podman[85420]: 2025-11-25 09:34:29.634763468 +0000 UTC m=+0.015994573 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 04:34:29 np0005534696 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.aswfow for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: Starting VRRP child process, pid=4
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: Startup complete
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: (VI_0) Entering BACKUP STATE (init)
Nov 25 04:34:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:29 2025: VRRP_Script(check_backend) succeeded
Nov 25 04:34:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:29.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000003s ======
Nov 25 04:34:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:30.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000003s
Nov 25 04:34:30 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:30 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:30 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:31.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:31 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 04:34:31 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 04:34:31 np0005534696 ceph-mon[75508]: Deploying daemon keepalived.rgw.default.compute-0.ulmpfs on compute-0
Nov 25 04:34:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:32.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:33 2025: (VI_0) Entering MASTER STATE
Nov 25 04:34:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999998s ======
Nov 25 04:34:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:33.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999998s
Nov 25 04:34:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:34.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:34 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:34 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:34 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:34 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:35 np0005534696 ceph-mon[75508]: Deploying daemon prometheus.compute-0 on compute-0
Nov 25 04:34:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:35.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:36.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:37 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Nov 25 04:34:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:34:37 2025: (VI_0) Entering BACKUP STATE
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.651285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277651390, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 5491, "num_deletes": 257, "total_data_size": 15987367, "memory_usage": 16834664, "flush_reason": "Manual Compaction"}
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277671149, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 10053654, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 5496, "table_properties": {"data_size": 10033629, "index_size": 12487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6469, "raw_key_size": 62159, "raw_average_key_size": 24, "raw_value_size": 9983568, "raw_average_value_size": 3881, "num_data_blocks": 554, "num_entries": 2572, "num_filter_entries": 2572, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 1764063170, "file_creation_time": 1764063277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 19899 microseconds, and 12600 cpu microseconds.
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.671194) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 10053654 bytes OK
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.671216) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.671611) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.671623) EVENT_LOG_v1 {"time_micros": 1764063277671620, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.671652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 15958189, prev total WAL file size 15958189, number of live WAL files 2.
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.673739) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(9818KB) 8(1648B)]
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277673833, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 10055302, "oldest_snapshot_seqno": -1}
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2318 keys, 10049839 bytes, temperature: kUnknown
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277690926, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 10049839, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10030461, "index_size": 12485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 5829, "raw_key_size": 58630, "raw_average_key_size": 25, "raw_value_size": 9983626, "raw_average_value_size": 4307, "num_data_blocks": 554, "num_entries": 2318, "num_filter_entries": 2318, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764063277, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.691072) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 10049839 bytes
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.691918) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 587.0 rd, 586.6 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(9.6, 0.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2577, records dropped: 259 output_compression: NoCompression
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.691935) EVENT_LOG_v1 {"time_micros": 1764063277691926, "job": 4, "event": "compaction_finished", "compaction_time_micros": 17131, "compaction_time_cpu_micros": 12892, "output_level": 6, "num_output_files": 1, "total_output_size": 10049839, "num_input_records": 2577, "num_output_records": 2318, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277693488, "job": 4, "event": "table_file_deletion", "file_number": 14}
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063277693529, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:34:37.673659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:34:37 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:38.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:39 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:39 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:39 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:39 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 25 04:34:39 np0005534696 ceph-mgr[75792]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 25 04:34:39 np0005534696 systemd[1]: session-33.scope: Deactivated successfully.
Nov 25 04:34:39 np0005534696 systemd[1]: session-33.scope: Consumed 13.125s CPU time.
Nov 25 04:34:39 np0005534696 systemd-logind[744]: Session 33 logged out. Waiting for processes to exit.
Nov 25 04:34:39 np0005534696 systemd-logind[744]: Removed session 33.
Nov 25 04:34:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: ignoring --setuser ceph since I am not root
Nov 25 04:34:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: ignoring --setgroup ceph since I am not root
Nov 25 04:34:39 np0005534696 ceph-mgr[75792]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 25 04:34:39 np0005534696 ceph-mgr[75792]: pidfile_write: ignore empty --pid-file
Nov 25 04:34:39 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'alerts'
Nov 25 04:34:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:39.797+0000 7f52e7bf5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:34:39 np0005534696 ceph-mgr[75792]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 25 04:34:39 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'balancer'
Nov 25 04:34:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:39.868+0000 7f52e7bf5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:34:39 np0005534696 ceph-mgr[75792]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 25 04:34:39 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'cephadm'
Nov 25 04:34:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:40.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:40 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'crash'
Nov 25 04:34:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:40.565+0000 7f52e7bf5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:34:40 np0005534696 ceph-mgr[75792]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 25 04:34:40 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'dashboard'
Nov 25 04:34:40 np0005534696 ceph-mon[75508]: from='mgr.14517 192.168.122.100:0/1422506760' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 25 04:34:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'devicehealth'
Nov 25 04:34:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:41.113+0000 7f52e7bf5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'diskprediction_local'
Nov 25 04:34:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 25 04:34:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 25 04:34:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]:  from numpy import show_config as show_numpy_config
Nov 25 04:34:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:41.254+0000 7f52e7bf5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'influx'
Nov 25 04:34:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:41.316+0000 7f52e7bf5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'insights'
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'iostat'
Nov 25 04:34:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:41.434+0000 7f52e7bf5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'k8sevents'
Nov 25 04:34:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001999995s ======
Nov 25 04:34:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999995s
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'localpool'
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mds_autoscaler'
Nov 25 04:34:41 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'mirroring'
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'nfs'
Nov 25 04:34:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:42.272+0000 7f52e7bf5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'orchestrator'
Nov 25 04:34:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:42.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:42.458+0000 7f52e7bf5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_perf_query'
Nov 25 04:34:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:42.524+0000 7f52e7bf5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'osd_support'
Nov 25 04:34:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:42.581+0000 7f52e7bf5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'pg_autoscaler'
Nov 25 04:34:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:42.649+0000 7f52e7bf5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'progress'
Nov 25 04:34:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:42.710+0000 7f52e7bf5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 25 04:34:42 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'prometheus'
Nov 25 04:34:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:43.004+0000 7f52e7bf5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rbd_support'
Nov 25 04:34:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:43.087+0000 7f52e7bf5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'restful'
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rgw'
Nov 25 04:34:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:43.458+0000 7f52e7bf5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'rook'
Nov 25 04:34:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:43.951+0000 7f52e7bf5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 25 04:34:43 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'selftest'
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:44.015+0000 7f52e7bf5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'snap_schedule'
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:44.085+0000 7f52e7bf5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'stats'
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'status'
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:44.213+0000 7f52e7bf5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telegraf'
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:44.274+0000 7f52e7bf5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'telemetry'
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:44.407+0000 7f52e7bf5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'test_orchestrator'
Nov 25 04:34:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 25 04:34:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:44.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:44.596+0000 7f52e7bf5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'volumes'
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:44.825+0000 7f52e7bf5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Loading python module 'zabbix'
Nov 25 04:34:44 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 e42: 3 total, 3 up, 3 in
Nov 25 04:34:44 np0005534696 ceph-mon[75508]: Active manager daemon compute-0.zcfgby restarted
Nov 25 04:34:44 np0005534696 ceph-mon[75508]: Activating manager daemon compute-0.zcfgby
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 2025-11-25T09:34:44.889+0000 7f52e7bf5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr load Constructed class from module: dashboard
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: mgr load Constructed class from module: prometheus
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: ms_deliver_dispatch: unhandled message 0x55959d743860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Starting engine...
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [prometheus INFO root] Starting engine...
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: [25/Nov/2025:09:34:44] ENGINE Bus STARTING
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:44] ENGINE Bus STARTING
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: CherryPy Checker:
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: The Application mounted at '' has an empty config.
Nov 25 04:34:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: 
Nov 25 04:34:44 np0005534696 ceph-mgr[75792]: [dashboard INFO root] Engine started...
Nov 25 04:34:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: [25/Nov/2025:09:34:45] ENGINE Serving on http://:::9283
Nov 25 04:34:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mgr-compute-2-flybft[75788]: [25/Nov/2025:09:34:45] ENGINE Bus STARTED
Nov 25 04:34:45 np0005534696 ceph-mgr[75792]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:45] ENGINE Serving on http://:::9283
Nov 25 04:34:45 np0005534696 ceph-mgr[75792]: [prometheus INFO cherrypy.error] [25/Nov/2025:09:34:45] ENGINE Bus STARTED
Nov 25 04:34:45 np0005534696 ceph-mgr[75792]: [prometheus INFO root] Engine started.
Nov 25 04:34:45 np0005534696 systemd-logind[744]: New session 35 of user ceph-admin.
Nov 25 04:34:45 np0005534696 systemd[1]: Started Session 35 of User ceph-admin.
Nov 25 04:34:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:45.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:45 np0005534696 podman[85613]: 2025-11-25 09:34:45.837855664 +0000 UTC m=+0.038346091 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:34:45 np0005534696 ceph-mon[75508]: Manager daemon compute-0.zcfgby is now available
Nov 25 04:34:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/mirror_snapshot_schedule"}]: dispatch
Nov 25 04:34:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.zcfgby/trash_purge_schedule"}]: dispatch
Nov 25 04:34:45 np0005534696 podman[85613]: 2025-11-25 09:34:45.921899652 +0000 UTC m=+0.122390060 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:34:46 np0005534696 podman[85722]: 2025-11-25 09:34:46.264520542 +0000 UTC m=+0.034178209 container exec 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:34:46 np0005534696 podman[85722]: 2025-11-25 09:34:46.267825249 +0000 UTC m=+0.037482907 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:34:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:46.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:46 np0005534696 podman[85791]: 2025-11-25 09:34:46.450903604 +0000 UTC m=+0.033229281 container exec 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:34:46 np0005534696 podman[85791]: 2025-11-25 09:34:46.457830064 +0000 UTC m=+0.040155721 container exec_died 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:34:46 np0005534696 podman[85844]: 2025-11-25 09:34:46.586444267 +0000 UTC m=+0.032315282 container exec 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, distribution-scope=public, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793)
Nov 25 04:34:46 np0005534696 podman[85844]: 2025-11-25 09:34:46.597809157 +0000 UTC m=+0.043680181 container exec_died 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, name=keepalived, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:34:46] ENGINE Bus STARTING
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:34:46] ENGINE Serving on http://192.168.122.100:8765
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:34:46] ENGINE Serving on https://192.168.122.100:7150
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:34:46] ENGINE Bus STARTED
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: [25/Nov/2025:09:34:46] ENGINE Client ('192.168.122.100', 39184) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:47.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:34:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:34:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:48.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:49 np0005534696 ceph-mon[75508]: Updating compute-0:/etc/ceph/ceph.conf
Nov 25 04:34:49 np0005534696 ceph-mon[75508]: Updating compute-1:/etc/ceph/ceph.conf
Nov 25 04:34:49 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.conf
Nov 25 04:34:49 np0005534696 ceph-mon[75508]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:34:49 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:34:49 np0005534696 ceph-mon[75508]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.conf
Nov 25 04:34:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 25 04:34:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Updating compute-2:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Updating compute-1:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Updating compute-0:/var/lib/ceph/af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/config/ceph.client.admin.keyring
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:50.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: Rados config object exists: conf-nfs.cephfs
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: Creating key for client.nfs.cephfs.0.0.compute-1.yfzsxe-rgw
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: Bind address in nfs.cephfs.0.0.compute-1.yfzsxe's ganesha conf is defaulting to empty
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: Deploying daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 04:34:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.jouchy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 04:34:51 np0005534696 podman[86993]: 2025-11-25 09:34:51.486900334 +0000 UTC m=+0.026848152 container create c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 04:34:51 np0005534696 systemd[1]: Started libpod-conmon-c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e.scope.
Nov 25 04:34:51 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:34:51 np0005534696 podman[86993]: 2025-11-25 09:34:51.541021864 +0000 UTC m=+0.080969691 container init c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_antonelli, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:34:51 np0005534696 podman[86993]: 2025-11-25 09:34:51.545352123 +0000 UTC m=+0.085299940 container start c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 04:34:51 np0005534696 podman[86993]: 2025-11-25 09:34:51.546558582 +0000 UTC m=+0.086506400 container attach c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_antonelli, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:34:51 np0005534696 infallible_antonelli[87006]: 167 167
Nov 25 04:34:51 np0005534696 systemd[1]: libpod-c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e.scope: Deactivated successfully.
Nov 25 04:34:51 np0005534696 podman[86993]: 2025-11-25 09:34:51.548885679 +0000 UTC m=+0.088833496 container died c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_antonelli, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:34:51 np0005534696 systemd[1]: var-lib-containers-storage-overlay-900390574a3ed41a7ac2ef79058cf29ebd17c4d1fde917fc68789f0175f5b407-merged.mount: Deactivated successfully.
Nov 25 04:34:51 np0005534696 podman[86993]: 2025-11-25 09:34:51.568152966 +0000 UTC m=+0.108100783 container remove c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:34:51 np0005534696 podman[86993]: 2025-11-25 09:34:51.475787187 +0000 UTC m=+0.015735023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:34:51 np0005534696 systemd[1]: libpod-conmon-c160b09d9058f7117e82c7bb418e9795745c3e9b8a2854297b9a5c120bf76a5e.scope: Deactivated successfully.
Nov 25 04:34:51 np0005534696 systemd[1]: Reloading.
Nov 25 04:34:51 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:34:51 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:34:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:51.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:51 np0005534696 systemd[1]: Reloading.
Nov 25 04:34:51 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:34:51 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:34:52 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:34:52 np0005534696 podman[87138]: 2025-11-25 09:34:52.185095279 +0000 UTC m=+0.025889457 container create 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:34:52 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6e28e8840d8a4d9431032e595588af23fc6a3b51316de69b1878e03f31444/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:52 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6e28e8840d8a4d9431032e595588af23fc6a3b51316de69b1878e03f31444/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:52 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6e28e8840d8a4d9431032e595588af23fc6a3b51316de69b1878e03f31444/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:52 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b6e28e8840d8a4d9431032e595588af23fc6a3b51316de69b1878e03f31444/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:34:52 np0005534696 ceph-mon[75508]: Creating key for client.nfs.cephfs.1.0.compute-2.jouchy
Nov 25 04:34:52 np0005534696 ceph-mon[75508]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 25 04:34:52 np0005534696 ceph-mon[75508]: Rados config object exists: conf-nfs.cephfs
Nov 25 04:34:52 np0005534696 ceph-mon[75508]: Creating key for client.nfs.cephfs.1.0.compute-2.jouchy-rgw
Nov 25 04:34:52 np0005534696 ceph-mon[75508]: Bind address in nfs.cephfs.1.0.compute-2.jouchy's ganesha conf is defaulting to empty
Nov 25 04:34:52 np0005534696 ceph-mon[75508]: Deploying daemon nfs.cephfs.1.0.compute-2.jouchy on compute-2
Nov 25 04:34:52 np0005534696 podman[87138]: 2025-11-25 09:34:52.225543686 +0000 UTC m=+0.066337864 container init 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:34:52 np0005534696 podman[87138]: 2025-11-25 09:34:52.230514574 +0000 UTC m=+0.071308752 container start 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:34:52 np0005534696 bash[87138]: 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99
Nov 25 04:34:52 np0005534696 podman[87138]: 2025-11-25 09:34:52.174427844 +0000 UTC m=+0.015222042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:34:52 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:34:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:34:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:34:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:34:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:34:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:34:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:34:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:34:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:34:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:52.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: Creating key for client.nfs.cephfs.2.0.compute-0.rychik
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 25 04:34:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 25 04:34:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999998s ======
Nov 25 04:34:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:53.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999998s
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000002:nfs.cephfs.1: -2
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 04:34:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:34:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 25 04:34:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 25 04:34:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 25 04:34:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rychik-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 25 04:34:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 25 04:34:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:54.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 25 04:34:55 np0005534696 ceph-mon[75508]: Rados config object exists: conf-nfs.cephfs
Nov 25 04:34:55 np0005534696 ceph-mon[75508]: Creating key for client.nfs.cephfs.2.0.compute-0.rychik-rgw
Nov 25 04:34:55 np0005534696 ceph-mon[75508]: Bind address in nfs.cephfs.2.0.compute-0.rychik's ganesha conf is defaulting to empty
Nov 25 04:34:55 np0005534696 ceph-mon[75508]: Deploying daemon nfs.cephfs.2.0.compute-0.rychik on compute-0
Nov 25 04:34:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:55 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:34:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:55 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:34:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:55 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:34:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:55 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:34:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:34:55 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:34:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:34:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:55.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:56.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:56 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:56 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:56 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:56 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:56 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:34:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:57.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:34:58.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:59 np0005534696 podman[87361]: 2025-11-25 09:34:59.289688297 +0000 UTC m=+0.037902331 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:34:59 np0005534696 podman[87361]: 2025-11-25 09:34:59.366922133 +0000 UTC m=+0.115136148 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:34:59 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:59 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:59 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:34:59 np0005534696 podman[87472]: 2025-11-25 09:34:59.710748941 +0000 UTC m=+0.037226276 container exec 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:34:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:34:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:34:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:34:59.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:34:59 np0005534696 podman[87495]: 2025-11-25 09:34:59.765720843 +0000 UTC m=+0.044247061 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:34:59 np0005534696 podman[87472]: 2025-11-25 09:34:59.768776775 +0000 UTC m=+0.095254110 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:34:59 np0005534696 podman[87545]: 2025-11-25 09:34:59.952749374 +0000 UTC m=+0.035122186 container exec 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:34:59 np0005534696 podman[87545]: 2025-11-25 09:34:59.960777506 +0000 UTC m=+0.043150299 container exec_died 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:35:00 np0005534696 podman[87597]: 2025-11-25 09:35:00.094085729 +0000 UTC m=+0.033409569 container exec 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.expose-services=, version=2.2.4, build-date=2023-02-22T09:23:20, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9)
Nov 25 04:35:00 np0005534696 podman[87597]: 2025-11-25 09:35:00.105809741 +0000 UTC m=+0.045133582 container exec_died 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, distribution-scope=public, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=2.2.4, io.openshift.tags=Ceph keepalived)
Nov 25 04:35:00 np0005534696 podman[87639]: 2025-11-25 09:35:00.209494165 +0000 UTC m=+0.034349729 container exec 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:35:00 np0005534696 podman[87639]: 2025-11-25 09:35:00.215721996 +0000 UTC m=+0.040577551 container exec_died 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 25 04:35:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:00.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:01.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:35:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:02 np0005534696 ceph-mon[75508]: Deploying daemon haproxy.nfs.cephfs.compute-1.xlgqkq on compute-1
Nov 25 04:35:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:02.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:03 np0005534696 ceph-mon[75508]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Nov 25 04:35:03 np0005534696 ceph-mon[75508]: Cluster is now healthy
Nov 25 04:35:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:03.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 25 04:35:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:04.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 25 04:35:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:05 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:05 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:05 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:05 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:05 np0005534696 ceph-mon[75508]: Deploying daemon haproxy.nfs.cephfs.compute-0.lycwwd on compute-0
Nov 25 04:35:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:05.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:05 np0005534696 podman[87755]: 2025-11-25 09:35:05.962978824 +0000 UTC m=+0.028980385 container create bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1 (image=quay.io/ceph/haproxy:2.3, name=thirsty_swartz)
Nov 25 04:35:05 np0005534696 systemd[1]: Started libpod-conmon-bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1.scope.
Nov 25 04:35:06 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:35:06 np0005534696 podman[87755]: 2025-11-25 09:35:06.015554579 +0000 UTC m=+0.081556151 container init bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1 (image=quay.io/ceph/haproxy:2.3, name=thirsty_swartz)
Nov 25 04:35:06 np0005534696 podman[87755]: 2025-11-25 09:35:06.019770394 +0000 UTC m=+0.085771955 container start bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1 (image=quay.io/ceph/haproxy:2.3, name=thirsty_swartz)
Nov 25 04:35:06 np0005534696 podman[87755]: 2025-11-25 09:35:06.02117346 +0000 UTC m=+0.087175022 container attach bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1 (image=quay.io/ceph/haproxy:2.3, name=thirsty_swartz)
Nov 25 04:35:06 np0005534696 thirsty_swartz[87768]: 0 0
Nov 25 04:35:06 np0005534696 systemd[1]: libpod-bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1.scope: Deactivated successfully.
Nov 25 04:35:06 np0005534696 podman[87755]: 2025-11-25 09:35:06.023086142 +0000 UTC m=+0.089087703 container died bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1 (image=quay.io/ceph/haproxy:2.3, name=thirsty_swartz)
Nov 25 04:35:06 np0005534696 systemd[1]: var-lib-containers-storage-overlay-53db16f9c534a12e8b0a387233180ee8868749b4f974525723703e4cd942738b-merged.mount: Deactivated successfully.
Nov 25 04:35:06 np0005534696 podman[87755]: 2025-11-25 09:35:05.95010604 +0000 UTC m=+0.016107621 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 04:35:06 np0005534696 podman[87755]: 2025-11-25 09:35:06.056134754 +0000 UTC m=+0.122136314 container remove bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1 (image=quay.io/ceph/haproxy:2.3, name=thirsty_swartz)
Nov 25 04:35:06 np0005534696 systemd[1]: libpod-conmon-bbc5e0b7933aae1f2863f5fcaddc994a07db885814d27390121660db8b363ec1.scope: Deactivated successfully.
Nov 25 04:35:06 np0005534696 systemd[1]: Reloading.
Nov 25 04:35:06 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:35:06 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:35:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:06 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:06 np0005534696 systemd[1]: Reloading.
Nov 25 04:35:06 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:35:06 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:35:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:06.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:06 np0005534696 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-2.flyakz for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:35:06 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:06 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:06 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:06 np0005534696 ceph-mon[75508]: Deploying daemon haproxy.nfs.cephfs.compute-2.flyakz on compute-2
Nov 25 04:35:06 np0005534696 podman[87903]: 2025-11-25 09:35:06.665380424 +0000 UTC m=+0.031033170 container create ab31ca55f63bb079170136f501492c375bb93d1b83f38f7e16e6472b3b01a138 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz)
Nov 25 04:35:06 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84fa9aa9cf91fb25f3ea7fd330b2468548cba309466e21d4ab3676ead5bc96bb/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:06 np0005534696 podman[87903]: 2025-11-25 09:35:06.698253147 +0000 UTC m=+0.063905892 container init ab31ca55f63bb079170136f501492c375bb93d1b83f38f7e16e6472b3b01a138 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz)
Nov 25 04:35:06 np0005534696 podman[87903]: 2025-11-25 09:35:06.702210236 +0000 UTC m=+0.067862991 container start ab31ca55f63bb079170136f501492c375bb93d1b83f38f7e16e6472b3b01a138 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz)
Nov 25 04:35:06 np0005534696 bash[87903]: ab31ca55f63bb079170136f501492c375bb93d1b83f38f7e16e6472b3b01a138
Nov 25 04:35:06 np0005534696 podman[87903]: 2025-11-25 09:35:06.650387581 +0000 UTC m=+0.016040346 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 25 04:35:06 np0005534696 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-2.flyakz for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:35:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [NOTICE] 328/093506 (2) : New worker #1 (4) forked
Nov 25 04:35:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093506 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:35:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:07 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb80023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:07 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc002010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:07 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:07 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:07 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:07 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:07 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 04:35:07 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 04:35:07 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 04:35:07 np0005534696 ceph-mon[75508]: Deploying daemon keepalived.nfs.cephfs.compute-0.kkgeot on compute-0
Nov 25 04:35:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 25 04:35:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:07.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 25 04:35:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:08 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0002040 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:08 np0005534696 podman[88009]: 2025-11-25 09:35:08.272394995 +0000 UTC m=+0.025971782 container create 2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea (image=quay.io/ceph/keepalived:2.2.4, name=eloquent_kalam, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, release=1793, description=keepalived for Ceph)
Nov 25 04:35:08 np0005534696 systemd[1]: Started libpod-conmon-2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea.scope.
Nov 25 04:35:08 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:35:08 np0005534696 podman[88009]: 2025-11-25 09:35:08.328213864 +0000 UTC m=+0.081790640 container init 2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea (image=quay.io/ceph/keepalived:2.2.4, name=eloquent_kalam, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.buildah.version=1.28.2, version=2.2.4, release=1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived)
Nov 25 04:35:08 np0005534696 podman[88009]: 2025-11-25 09:35:08.333262706 +0000 UTC m=+0.086839474 container start 2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea (image=quay.io/ceph/keepalived:2.2.4, name=eloquent_kalam, io.openshift.tags=Ceph keepalived, name=keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, com.redhat.component=keepalived-container, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 25 04:35:08 np0005534696 podman[88009]: 2025-11-25 09:35:08.335916305 +0000 UTC m=+0.089493083 container attach 2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea (image=quay.io/ceph/keepalived:2.2.4, name=eloquent_kalam, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vcs-type=git, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=)
Nov 25 04:35:08 np0005534696 eloquent_kalam[88022]: 0 0
Nov 25 04:35:08 np0005534696 systemd[1]: libpod-2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea.scope: Deactivated successfully.
Nov 25 04:35:08 np0005534696 conmon[88022]: conmon 2e1cc2efbb6600bd980c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea.scope/container/memory.events
Nov 25 04:35:08 np0005534696 podman[88009]: 2025-11-25 09:35:08.337572637 +0000 UTC m=+0.091149415 container died 2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea (image=quay.io/ceph/keepalived:2.2.4, name=eloquent_kalam, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, architecture=x86_64, com.redhat.component=keepalived-container, vcs-type=git, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived)
Nov 25 04:35:08 np0005534696 systemd[1]: var-lib-containers-storage-overlay-5c52edecee1c9fa3777b658b7116546067df27da47442fc48e8b445f949b00d1-merged.mount: Deactivated successfully.
Nov 25 04:35:08 np0005534696 podman[88009]: 2025-11-25 09:35:08.358048166 +0000 UTC m=+0.111624943 container remove 2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea (image=quay.io/ceph/keepalived:2.2.4, name=eloquent_kalam, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 04:35:08 np0005534696 podman[88009]: 2025-11-25 09:35:08.262068799 +0000 UTC m=+0.015645586 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 04:35:08 np0005534696 systemd[1]: libpod-conmon-2e1cc2efbb6600bd980c73017d5733d8544c45f30d8e42cd686acc952644aeea.scope: Deactivated successfully.
Nov 25 04:35:08 np0005534696 systemd[1]: Reloading.
Nov 25 04:35:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:08.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:08 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:35:08 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:35:08 np0005534696 systemd[1]: Reloading.
Nov 25 04:35:08 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:35:08 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:35:08 np0005534696 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-2.opynes for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:35:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:08 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 04:35:08 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 04:35:08 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 04:35:08 np0005534696 ceph-mon[75508]: Deploying daemon keepalived.nfs.cephfs.compute-2.opynes on compute-2
Nov 25 04:35:08 np0005534696 podman[88155]: 2025-11-25 09:35:08.974585751 +0000 UTC m=+0.027846853 container create c2e2a9cc05a5c774b273a97c500e0a3173ef798c92c3ea799508b8605830eab5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes, distribution-scope=public, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container)
Nov 25 04:35:09 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d81095aaab1c0d9350b87d15f30789ee6114aee23876842a2467dcfd2ec76d/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:35:09 np0005534696 podman[88155]: 2025-11-25 09:35:09.009685372 +0000 UTC m=+0.062946494 container init c2e2a9cc05a5c774b273a97c500e0a3173ef798c92c3ea799508b8605830eab5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes, description=keepalived for Ceph, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, name=keepalived, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.28.2, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, release=1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 04:35:09 np0005534696 podman[88155]: 2025-11-25 09:35:09.013553836 +0000 UTC m=+0.066814938 container start c2e2a9cc05a5c774b273a97c500e0a3173ef798c92c3ea799508b8605830eab5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes, version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.buildah.version=1.28.2, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, release=1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container)
Nov 25 04:35:09 np0005534696 bash[88155]: c2e2a9cc05a5c774b273a97c500e0a3173ef798c92c3ea799508b8605830eab5
Nov 25 04:35:09 np0005534696 podman[88155]: 2025-11-25 09:35:08.963420534 +0000 UTC m=+0.016681656 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 25 04:35:09 np0005534696 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-2.opynes for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: Starting VRRP child process, pid=4
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: Startup complete
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: (VI_0) Entering BACKUP STATE (init)
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: VRRP_Script(check_backend) succeeded
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:09 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:09 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:09.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:10 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 25 04:35:10 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 25 04:35:10 np0005534696 ceph-mon[75508]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 25 04:35:10 np0005534696 ceph-mon[75508]: Deploying daemon keepalived.nfs.cephfs.compute-1.adsqcr on compute-1
Nov 25 04:35:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:10 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc002010 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:10.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:11 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0002b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:11 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999998s ======
Nov 25 04:35:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:11.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999998s
Nov 25 04:35:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:12 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:12.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:13 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc0091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:13 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0002b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999998s ======
Nov 25 04:35:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:13.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999998s
Nov 25 04:35:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:35:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:14 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:14.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:15 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:15 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc009330 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:15.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:15 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:15 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:16 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0002b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:16.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:16 np0005534696 podman[88314]: 2025-11-25 09:35:16.764649759 +0000 UTC m=+0.037384922 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 25 04:35:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:16 np0005534696 podman[88314]: 2025-11-25 09:35:16.844867523 +0000 UTC m=+0.117602666 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:35:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:17 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:17 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:17 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:17 np0005534696 podman[88426]: 2025-11-25 09:35:17.202111313 +0000 UTC m=+0.036434383 container exec 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:35:17 np0005534696 podman[88426]: 2025-11-25 09:35:17.209846366 +0000 UTC m=+0.044169436 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:35:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:17 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:17 np0005534696 podman[88496]: 2025-11-25 09:35:17.407184982 +0000 UTC m=+0.039658760 container exec 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:35:17 np0005534696 podman[88496]: 2025-11-25 09:35:17.412974283 +0000 UTC m=+0.045448060 container exec_died 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:35:17 np0005534696 podman[88548]: 2025-11-25 09:35:17.553308501 +0000 UTC m=+0.036024686 container exec 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1793)
Nov 25 04:35:17 np0005534696 podman[88548]: 2025-11-25 09:35:17.563842786 +0000 UTC m=+0.046558971 container exec_died 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, release=1793, vcs-type=git, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=keepalived)
Nov 25 04:35:17 np0005534696 podman[88587]: 2025-11-25 09:35:17.668674077 +0000 UTC m=+0.034879321 container exec 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:35:17 np0005534696 podman[88587]: 2025-11-25 09:35:17.67886007 +0000 UTC m=+0.045065314 container exec_died 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 04:35:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:17.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:17 np0005534696 systemd-logind[744]: New session 36 of user zuul.
Nov 25 04:35:17 np0005534696 systemd[1]: Started Session 36 of User zuul.
Nov 25 04:35:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:18 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc009c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999998s ======
Nov 25 04:35:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:18.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999998s
Nov 25 04:35:18 np0005534696 python3.9[88798]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:35:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:19 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0003ff0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:19 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:19 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:19 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:19 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:35:19 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:19 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:19 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:35:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:19.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:19 np0005534696 python3.9[89039]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:35:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:20 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:35:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:20 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:20.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:21 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc009c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:21 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0003ff0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:22 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 25 04:35:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:22.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.648912) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322648929, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1757, "num_deletes": 251, "total_data_size": 6621419, "memory_usage": 6937984, "flush_reason": "Manual Compaction"}
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322656466, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 4023900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 5501, "largest_seqno": 7253, "table_properties": {"data_size": 4016908, "index_size": 3742, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17514, "raw_average_key_size": 20, "raw_value_size": 4001389, "raw_average_value_size": 4690, "num_data_blocks": 170, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063277, "oldest_key_time": 1764063277, "file_creation_time": 1764063322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7581 microseconds, and 5703 cpu microseconds.
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.656492) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 4023900 bytes OK
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.656503) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.656817) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.656829) EVENT_LOG_v1 {"time_micros": 1764063322656826, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.656839) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 6612733, prev total WAL file size 6612733, number of live WAL files 2.
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.657683) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3929KB)], [15(9814KB)]
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322657698, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14073739, "oldest_snapshot_seqno": -1}
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 2633 keys, 12687944 bytes, temperature: kUnknown
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322685871, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 12687944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12666339, "index_size": 13944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6597, "raw_key_size": 66564, "raw_average_key_size": 25, "raw_value_size": 12613594, "raw_average_value_size": 4790, "num_data_blocks": 618, "num_entries": 2633, "num_filter_entries": 2633, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764063322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686001) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12687944 bytes
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686325) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 498.9 rd, 449.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.6 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(6.7) write-amplify(3.2) OK, records in: 3171, records dropped: 538 output_compression: NoCompression
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.686339) EVENT_LOG_v1 {"time_micros": 1764063322686333, "job": 6, "event": "compaction_finished", "compaction_time_micros": 28211, "compaction_time_cpu_micros": 16209, "output_level": 6, "num_output_files": 1, "total_output_size": 12687944, "num_input_records": 3171, "num_output_records": 2633, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322686824, "job": 6, "event": "table_file_deletion", "file_number": 17}
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063322687858, "job": 6, "event": "table_file_deletion", "file_number": 15}
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.657657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:35:22 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:35:22.687929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:35:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:23 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:35:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:23 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:35:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:23 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:23 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc009c50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:23 np0005534696 ceph-mon[75508]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Nov 25 04:35:23 np0005534696 ceph-mon[75508]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Nov 25 04:35:23 np0005534696 ceph-mon[75508]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Nov 25 04:35:23 np0005534696 ceph-mon[75508]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Nov 25 04:35:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:23.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:24 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0003ff0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:24.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:25 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:25 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:25 np0005534696 ceph-mon[75508]: Reconfiguring grafana.compute-0 (dependencies changed)...
Nov 25 04:35:25 np0005534696 ceph-mon[75508]: Reconfiguring daemon grafana.compute-0 on compute-0
Nov 25 04:35:25 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:25 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999998s ======
Nov 25 04:35:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999998s
Nov 25 04:35:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:26 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:35:26 np0005534696 podman[89232]: 2025-11-25 09:35:26.210296359 +0000 UTC m=+0.035248702 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 25 04:35:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:26 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc00b0e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:26 np0005534696 podman[89232]: 2025-11-25 09:35:26.284193878 +0000 UTC m=+0.109146241 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:35:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:26 np0005534696 systemd[1]: session-36.scope: Deactivated successfully.
Nov 25 04:35:26 np0005534696 systemd[1]: session-36.scope: Consumed 6.377s CPU time.
Nov 25 04:35:26 np0005534696 systemd-logind[744]: Session 36 logged out. Waiting for processes to exit.
Nov 25 04:35:26 np0005534696 systemd-logind[744]: Removed session 36.
Nov 25 04:35:26 np0005534696 podman[89343]: 2025-11-25 09:35:26.62233518 +0000 UTC m=+0.034733007 container exec 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:35:26 np0005534696 ceph-mon[75508]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Nov 25 04:35:26 np0005534696 ceph-mon[75508]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Nov 25 04:35:26 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:26 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:26 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Nov 25 04:35:26 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:26 np0005534696 podman[89363]: 2025-11-25 09:35:26.679739937 +0000 UTC m=+0.045515847 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:35:26 np0005534696 podman[89343]: 2025-11-25 09:35:26.681948112 +0000 UTC m=+0.094345938 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:35:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:26 np0005534696 podman[89414]: 2025-11-25 09:35:26.868772551 +0000 UTC m=+0.035777673 container exec 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:35:26 np0005534696 podman[89432]: 2025-11-25 09:35:26.925748525 +0000 UTC m=+0.045872475 container exec_died 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:35:26 np0005534696 podman[89414]: 2025-11-25 09:35:26.928411572 +0000 UTC m=+0.095416674 container exec_died 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:35:27 np0005534696 podman[89466]: 2025-11-25 09:35:27.066790128 +0000 UTC m=+0.039591304 container exec 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 25 04:35:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:27 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:27 np0005534696 podman[89484]: 2025-11-25 09:35:27.128748352 +0000 UTC m=+0.045336741 container exec_died 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, name=keepalived)
Nov 25 04:35:27 np0005534696 podman[89466]: 2025-11-25 09:35:27.130733259 +0000 UTC m=+0.103534434 container exec_died 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, description=keepalived for Ceph, com.redhat.component=keepalived-container, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, release=1793, vendor=Red Hat, Inc.)
Nov 25 04:35:27 np0005534696 podman[89507]: 2025-11-25 09:35:27.233333883 +0000 UTC m=+0.034786928 container exec 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 25 04:35:27 np0005534696 podman[89507]: 2025-11-25 09:35:27.243863579 +0000 UTC m=+0.045316604 container exec_died 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:35:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:27 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:27.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:35:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:28 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000999997s ======
Nov 25 04:35:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:28.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999997s
Nov 25 04:35:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:29 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc00b0e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:29 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:29.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:30 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:31 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:31 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc00b0e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:31.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:31 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:31 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:35:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:32 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:32.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093532 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:35:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:33 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:33 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:33.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:34 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc00b0e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:34.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:35 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:35 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:35.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:36 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:37 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:37 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed0003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:35:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:37.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:35:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:38 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:39 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:39 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:39.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:40 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:40.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:41 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:41 np0005534696 systemd-logind[744]: New session 37 of user zuul.
Nov 25 04:35:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:41 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:41 np0005534696 systemd[1]: Started Session 37 of User zuul.
Nov 25 04:35:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:35:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:41.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:35:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:41 np0005534696 python3.9[89781]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 04:35:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:42 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed0004360 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:42.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:42 np0005534696 python3.9[89955]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:35:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:43 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:43 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:43 np0005534696 python3.9[90112]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:35:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:43.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:44 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:44 np0005534696 python3.9[90266]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:35:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:45 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed0004c80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e43 e43: 3 total, 3 up, 3 in
Nov 25 04:35:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:45 np0005534696 python3.9[90421]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:35:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:45 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:35:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:45.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:35:45 np0005534696 python3.9[90573]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:35:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:46 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e44 e44: 3 total, 3 up, 3 in
Nov 25 04:35:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:46 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:46 np0005534696 python3.9[90724]: ansible-ansible.builtin.service_facts Invoked
Nov 25 04:35:46 np0005534696 network[90741]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:35:46 np0005534696 network[90742]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:35:46 np0005534696 network[90743]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:35:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:46.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:47 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:47 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e45 e45: 3 total, 3 up, 3 in
Nov 25 04:35:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:47 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:47 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 45 pg[3.0( empty local-lis/les=25/26 n=0 ec=14/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=12.260632515s) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active pruub 164.691040039s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:35:47 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 45 pg[3.0( empty local-lis/les=25/26 n=0 ec=14/14 lis/c=25/25 les/c/f=26/26/0 sis=45 pruub=12.260632515s) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown pruub 164.691040039s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e46 e46: 3 total, 3 up, 3 in
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1c( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.b( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1f( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.a( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.4( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.3( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.2( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.6( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.c( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.d( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.9( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.f( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.10( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.12( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.13( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.15( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.17( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.14( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.19( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1a( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1b( empty local-lis/les=25/26 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.0( empty local-lis/les=45/46 n=0 ec=14/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1a( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=25/25 les/c/f=26/26/0 sis=45) [2] r=0 lpr=45 pi=[25,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:48 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:48.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 25 04:35:48 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 25 04:35:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:48 np0005534696 python3.9[91005]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:35:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:49 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:49 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Nov 25 04:35:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 25 04:35:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 25 04:35:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:49 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed0004c80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:49 np0005534696 python3.9[91156]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:35:49 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Nov 25 04:35:49 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Nov 25 04:35:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:49.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Nov 25 04:35:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:50 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:50.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:50 np0005534696 python3.9[91311]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:35:50 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 25 04:35:50 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 25 04:35:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:51 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:51 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Nov 25 04:35:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 25 04:35:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:51 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 25 04:35:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:51 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:51 np0005534696 python3.9[91470]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:35:51 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 25 04:35:51 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 25 04:35:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:51.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:52 np0005534696 python3.9[91555]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:35:52 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Nov 25 04:35:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:52 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed0004e20 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:52 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:52 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:52.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:52 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 25 04:35:52 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 25 04:35:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:53 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=25/26 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=47 pruub=14.882322311s) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active pruub 172.691162109s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.0( empty local-lis/les=25/26 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=47 pruub=14.882322311s) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown pruub 172.691162109s@ mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.6( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.8( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.a( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.b( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.d( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.e( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.10( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.17( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.1b( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.1c( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 50 pg[5.1f( empty local-lis/les=25/26 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:35:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.1d( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.1f( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.1e( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.10( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.14( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.13( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.11( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.15( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.17( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.9( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.12( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.8( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.b( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.a( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.0( empty local-lis/les=47/51 n=0 ec=16/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.1c( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.7( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.4( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.5( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.2( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.3( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.16( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.e( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.d( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.c( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.6( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.1( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.18( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.1b( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.1a( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.f( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 51 pg[5.19( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=25/25 les/c/f=26/26/0 sis=47) [2] r=0 lpr=47 pi=[25,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:35:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:53 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 25 04:35:53 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 25 04:35:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:35:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:53.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:35:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:54 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:54 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Nov 25 04:35:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:35:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:54.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:35:54 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 25 04:35:54 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 25 04:35:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:55 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Nov 25 04:35:55 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 25 04:35:55 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:55 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:55 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 25 04:35:55 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:55 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:55 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:55 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 25 04:35:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:35:55 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 25 04:35:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:56 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:56 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Nov 25 04:35:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:35:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:56.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:35:56 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 25 04:35:56 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 25 04:35:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:57 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed0005fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:57 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 25 04:35:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Nov 25 04:35:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:57 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:57 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 25 04:35:57 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 25 04:35:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:35:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:57.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:35:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:58 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 25 04:35:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:35:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:35:58.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:35:58 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 25 04:35:58 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 25 04:35:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:59 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:59 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Nov 25 04:35:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:35:59 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:35:59 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 25 04:35:59 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 25 04:35:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:35:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:35:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:35:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:35:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:35:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:35:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:35:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:00 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:00.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:00 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 25 04:36:00 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 25 04:36:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:01 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:36:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.11( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.1c( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.1d( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.13( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.a( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.4( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.b( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[10.3( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.7( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.5( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[7.5( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.9( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.d( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.3( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.c( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.2( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[7.a( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.f( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[7.14( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.1e( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.10( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[7.16( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.13( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.1d( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.12( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.15( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[7.11( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.1a( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.18( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[12.17( empty local-lis/les=0/0 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.18( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[7.1d( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[2.1b( empty local-lis/les=0/0 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[7.1f( empty local-lis/les=0/0 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1d( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924955368s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.867813110s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1d( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924934387s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.867813110s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1c( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.926416397s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869400024s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1c( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.926401138s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869400024s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1f( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924758911s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.867843628s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1f( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924747467s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.867843628s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912719727s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855880737s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912710190s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855880737s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1e( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924630165s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.867858887s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1e( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924623489s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.867858887s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912556648s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855865479s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912549019s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855865479s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912286758s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855682373s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912277222s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855682373s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.11( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925532341s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869003296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.11( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925524712s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869003296s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912363052s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855911255s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912355423s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855911255s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.10( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925393105s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869003296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.10( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925385475s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869003296s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912191391s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855895996s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.912183762s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855895996s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.911880493s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855667114s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.911872864s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855667114s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.15( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925168991s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869018555s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.15( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925162315s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869018555s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.911745071s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855667114s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.911736488s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855667114s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.14( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925012589s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869003296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.14( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925004959s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869003296s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.17( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924970627s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869033813s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.17( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924962044s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869033813s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.911200523s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855331421s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.911191940s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855331421s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.16( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925469398s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869674683s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.16( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.925461769s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869674683s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.911041260s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855316162s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.911034584s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855316162s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.9( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924725533s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869064331s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.9( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924700737s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869064331s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.907902718s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.852371216s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.907894135s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.852371216s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.907785416s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.852355957s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.907775879s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.852355957s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.a( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924710274s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869354248s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.a( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924702644s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869354248s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.6( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924998283s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869705200s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.6( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924989700s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869705200s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.910559654s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855331421s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.910527229s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.855316162s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.910540581s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855331421s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.910518646s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.855316162s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.7( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924581528s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869445801s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.7( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924572945s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869445801s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.907431602s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.852340698s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.907381058s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.852340698s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.907395363s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.852340698s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.907371521s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.852340698s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.5( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924420357s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869461060s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.5( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924412727s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869461060s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906929970s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.852020264s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.2( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924544334s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869659424s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906916618s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.852020264s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.2( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924535751s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869659424s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.3( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924453735s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869674683s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906796455s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.852020264s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.3( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924443245s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869674683s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906785011s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.852020264s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924427986s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869720459s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924418449s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869720459s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.f( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924137115s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869674683s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.f( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.924085617s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869674683s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906021118s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.851745605s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.905979156s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.851745605s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.c( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.923849106s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869689941s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.c( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.923838615s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869689941s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906092644s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.852020264s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906082153s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.852020264s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906250954s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.852340698s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1b( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.923787117s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869888306s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.906237602s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.852340698s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.1b( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.923775673s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869888306s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.18( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.923475266s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869720459s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.18( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.923466682s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869720459s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.905460358s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.851745605s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.905447006s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.851745605s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.19( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.923575401s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 181.869903564s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[5.19( empty local-lis/les=47/51 n=0 ec=47/16 lis/c=47/47 les/c/f=51/51/0 sis=57 pruub=15.923567772s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.869903564s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.904625893s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.851730347s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.904611588s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.851730347s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.904284477s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 176.851745605s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/14 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=10.904262543s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 176.851745605s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.11( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[11.13( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.1d( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.1f( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.1c( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.16( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.19( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[11.16( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.15( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.6( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.a( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.b( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[6.7( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.9( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[11.a( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.8( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[6.1( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.3( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[6.3( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.1( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.d( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[11.e( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.3( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.f( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.2( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.c( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.5( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.9( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.6( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.2( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.1c( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[8.1f( empty local-lis/les=0/0 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.14( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[4.15( empty local-lis/les=0/0 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[11.17( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 57 pg[11.19( empty local-lis/les=0/0 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:01 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed4002600 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 25 04:36:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 25 04:36:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:02 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8006f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:36:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[11.16( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[11.17( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.19( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.11( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.16( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.15( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.13( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[11.3( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.a( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.4( v 42'64 (0'0,42'64] local-lis/les=57/58 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[6.f( v 42'42 lc 41'1 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.b( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[10.3( v 54'99 lc 42'84 (0'0,54'99] local-lis/les=57/58 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=54'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.2( v 28'12 (0'0,28'12] local-lis/les=57/58 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.3( v 28'12 (0'0,28'12] local-lis/les=57/58 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.1d( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[6.d( v 42'42 lc 41'7 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.1c( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[10.1( v 42'96 (0'0,42'96] local-lis/les=57/58 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.7( v 42'64 lc 42'33 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.c( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.5( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[6.1( v 42'42 (0'0,42'42] local-lis/les=57/58 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.3( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.9( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[11.a( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.2( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[7.5( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.6( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.a( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[6.5( v 42'42 lc 41'6 (0'0,42'42] local-lis/les=57/58 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[6.7( v 42'42 lc 41'14 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.b( v 28'12 lc 28'8 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[11.8( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.d( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.9( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[11.e( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[10.f( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[6.3( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=57/58 n=2 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=42'42 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.1( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.d( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.3( v 42'64 (0'0,42'64] local-lis/les=57/58 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.f( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.c( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.9( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.6( v 28'12 lc 0'0 (0'0,28'12] local-lis/les=57/58 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[6.b( v 42'42 lc 0'0 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=49/49 les/c/f=51/51/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=42'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.f( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[7.a( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.5( v 28'12 (0'0,28'12] local-lis/les=57/58 n=1 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.8( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[11.19( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[7.14( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[10.4( v 42'96 (0'0,42'96] local-lis/les=57/58 n=1 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.10( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.15( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[7.16( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.1e( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.13( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.14( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.2( v 42'64 (0'0,42'64] local-lis/les=57/58 n=1 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.1d( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[7.11( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.1a( v 56'67 lc 42'59 (0'0,56'67] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=56'67 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.15( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[10.1e( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.18( v 42'64 lc 42'20 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[10.11( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.12( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.1f( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[12.17( v 42'64 (0'0,42'64] local-lis/les=57/58 n=0 ec=55/39 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=42'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.1c( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[7.1d( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[10.10( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.1b( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[2.18( empty local-lis/les=57/58 n=0 ec=45/12 lis/c=45/45 les/c/f=46/46/0 sis=57) [2] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.1f( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[8.11( v 28'12 (0'0,28'12] local-lis/les=57/58 n=0 ec=51/27 lis/c=51/51 les/c/f=52/52/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=28'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[7.1f( empty local-lis/les=57/58 n=0 ec=49/18 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[11.13( v 42'2 (0'0,42'2] local-lis/les=57/58 n=0 ec=53/33 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.1d( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[10.12( v 42'96 (0'0,42'96] local-lis/les=57/58 n=0 ec=53/31 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=42'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 58 pg[4.1c( empty local-lis/les=57/58 n=0 ec=47/15 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:02.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:03 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Nov 25 04:36:03 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 04:36:03 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 25 04:36:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:03 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efebc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:03.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:04 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed4003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 04:36:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 25 04:36:04 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Nov 25 04:36:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:04.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:04 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 25 04:36:04 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 25 04:36:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:05 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8006f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.983806610s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 42'42 active pruub 182.946441650s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[6.f( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.983759880s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 182.946441650s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[6.3( v 42'42 (0'0,42'42] local-lis/les=57/58 n=2 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.983760834s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 42'42 active pruub 182.946929932s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[6.3( v 42'42 (0'0,42'42] local-lis/les=57/58 n=2 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.983734131s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 182.946929932s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.983792305s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 42'42 active pruub 182.947067261s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[6.7( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.983304024s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 42'42 active pruub 182.946746826s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:05 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 04:36:05 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[6.7( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.982976913s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 182.946746826s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[6.b( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=12.983777046s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 182.947067261s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:05 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 25 04:36:05 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 25 04:36:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:05.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61) [2] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61) [2] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61) [2] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61) [2] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61) [2] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61) [2] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61) [2] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 61 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=61) [2] r=0 lpr=61 pi=[51,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093606 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:36:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:06 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:06 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 04:36:06 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 25 04:36:06 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 62 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1a scrub starts
Nov 25 04:36:06 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1a scrub ok
Nov 25 04:36:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:07 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed4003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:07 np0005534696 systemd[82992]: Starting Mark boot as successful...
Nov 25 04:36:07 np0005534696 systemd[82992]: Finished Mark boot as successful.
Nov 25 04:36:07 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Nov 25 04:36:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:07 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:07 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.18 scrub starts
Nov 25 04:36:07 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.18 scrub ok
Nov 25 04:36:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:08 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:08 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 64 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:08.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 25 04:36:08 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 25 04:36:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:09 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:09 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 65 pg[9.17( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 65 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 65 pg[9.3( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 65 pg[9.7( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 65 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 65 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 65 pg[9.13( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 65 pg[9.b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=62/51 les/c/f=63/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:09 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Nov 25 04:36:09 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Nov 25 04:36:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:10 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed4003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:10.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:10 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 25 04:36:10 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 25 04:36:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:11 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efedc0c5a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:11 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Nov 25 04:36:11 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 04:36:11 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 25 04:36:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:11 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:11 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 25 04:36:11 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 25 04:36:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:12 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 04:36:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 25 04:36:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:12 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.11 deep-scrub starts
Nov 25 04:36:12 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.11 deep-scrub ok
Nov 25 04:36:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:13 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:13 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 67 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:13 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 04:36:13 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 67 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 67 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 67 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 67 pg[6.d( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=12.915103912s) [0] r=-1 lpr=67 pi=[57,67)/1 crt=42'42 mlcod 42'42 active pruub 190.946655273s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 67 pg[6.d( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=12.915080070s) [0] r=-1 lpr=67 pi=[57,67)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 190.946655273s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:13 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efedc0c4a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 67 pg[6.5( v 42'42 (0'0,42'42] local-lis/les=57/58 n=2 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=12.914989471s) [0] r=-1 lpr=67 pi=[57,67)/1 crt=42'42 mlcod 42'42 active pruub 190.946960449s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 67 pg[6.5( v 42'42 (0'0,42'42] local-lis/les=57/58 n=2 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=67 pruub=12.914954185s) [0] r=-1 lpr=67 pi=[57,67)/1 crt=42'42 mlcod 0'0 unknown NOTIFY pruub 190.946960449s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 25 04:36:13 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 25 04:36:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:14 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 04:36:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 25 04:36:14 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 68 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 68 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 68 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 68 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 68 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 68 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 68 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 68 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=68) [2]/[1] r=-1 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:14.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 25 04:36:14 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 25 04:36:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:15 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb00050f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:15 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed40045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Nov 25 04:36:15 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 04:36:15 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 25 04:36:15 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 25 04:36:15 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 25 04:36:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:15 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:36:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:15.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:16 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efedc0c4a70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 70 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 70 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 70 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 70 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 70 pg[9.5( v 69'1160 (0'0,69'1160] local-lis/les=0/0 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 luod=0'0 crt=69'1157 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 70 pg[9.5( v 69'1160 (0'0,69'1160] local-lis/les=0/0 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 crt=69'1157 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 70 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 70 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:16 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 04:36:16 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 25 04:36:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:16.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Nov 25 04:36:16 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Nov 25 04:36:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:17 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8001110 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:17 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8001110 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Nov 25 04:36:17 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 71 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:17 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 71 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:17 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 71 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:17 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 71 pg[9.5( v 69'1160 (0'0,69'1160] local-lis/les=70/71 n=6 ec=51/29 lis/c=68/51 les/c/f=69/52/0 sis=70) [2] r=0 lpr=70 pi=[51,70)/1 crt=69'1160 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Nov 25 04:36:17 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Nov 25 04:36:17 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Nov 25 04:36:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:17.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:18 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb8001110 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:18.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Nov 25 04:36:18 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 25 04:36:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:18 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:36:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:18 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:36:18 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 25 04:36:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:19 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efedc03abd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:19 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed40045b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:19 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 25 04:36:19 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 25 04:36:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:19.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:20 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0005290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:20.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:20 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 25 04:36:20 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 25 04:36:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:21 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0005290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:21 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efedc03b4f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:21 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Nov 25 04:36:21 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 04:36:21 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 25 04:36:21 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 25 04:36:21 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 25 04:36:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:21 : epoch 6925783c : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:36:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:22 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed40045b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:22.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 04:36:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 25 04:36:22 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 25 04:36:22 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 25 04:36:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:23 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0005290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:23 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0005290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Nov 25 04:36:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 04:36:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 25 04:36:23 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 25 04:36:23 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 25 04:36:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:23.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:23 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:23 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:24 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efedc03b4f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:24.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 04:36:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 25 04:36:24 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Nov 25 04:36:24 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:24 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:24 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:24 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:24 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 25 04:36:24 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 25 04:36:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:25 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efed40056b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[87150]: 25/11/2025 09:36:25 : epoch 6925783c : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efeb0005290 fd 49 proxy ignored for local
Nov 25 04:36:25 np0005534696 kernel: ganesha.nfsd[87197]: segfault at 50 ip 00007eff6957632e sp 00007eff2f7fd210 error 4 in libntirpc.so.5.8[7eff6955b000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 04:36:25 np0005534696 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 04:36:25 np0005534696 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 25 04:36:25 np0005534696 systemd[1]: Started Process Core Dump (PID 91723/UID 0).
Nov 25 04:36:25 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.11 deep-scrub starts
Nov 25 04:36:25 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.11 deep-scrub ok
Nov 25 04:36:25 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 04:36:25 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 25 04:36:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Nov 25 04:36:25 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583978653s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 198.947128296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:25 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583838463s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.947128296s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:25 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:25 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:26 np0005534696 systemd-coredump[91724]: Process 87154 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 44:#012#0  0x00007eff6957632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 25 04:36:26 np0005534696 systemd[1]: systemd-coredump@0-91723-0.service: Deactivated successfully.
Nov 25 04:36:26 np0005534696 podman[91732]: 2025-11-25 09:36:26.537867742 +0000 UTC m=+0.025778300 container died 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:36:26 np0005534696 systemd[1]: var-lib-containers-storage-overlay-20b6e28e8840d8a4d9431032e595588af23fc6a3b51316de69b1878e03f31444-merged.mount: Deactivated successfully.
Nov 25 04:36:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:26.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:26 np0005534696 podman[91732]: 2025-11-25 09:36:26.55612686 +0000 UTC m=+0.044037417 container remove 48893ad47177f9d7e251a7e751afe87090be44318170dde0d86dec39f80c1e99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 25 04:36:26 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Main process exited, code=exited, status=139/n/a
Nov 25 04:36:26 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Failed with result 'exit-code'.
Nov 25 04:36:26 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.044s CPU time.
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 25 04:36:26 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 04:36:26 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 25 04:36:26 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:27 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 25 04:36:27 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 25 04:36:27 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Nov 25 04:36:27 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:27 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:27.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093628 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:36:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:28.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:28 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 25 04:36:28 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 25 04:36:28 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Nov 25 04:36:28 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:28 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:28 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:28 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:29 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 25 04:36:29 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 25 04:36:29 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Nov 25 04:36:29 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:29 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:29.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:30.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:30 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.4 scrub starts
Nov 25 04:36:30 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.4 scrub ok
Nov 25 04:36:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093631 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:36:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:31 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Nov 25 04:36:31 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 04:36:31 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 25 04:36:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:31.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:31 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 25 04:36:31 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 25 04:36:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:32.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:32 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Nov 25 04:36:32 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.13 scrub starts
Nov 25 04:36:32 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.13 scrub ok
Nov 25 04:36:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 04:36:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 25 04:36:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Nov 25 04:36:33 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 25 04:36:33 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 25 04:36:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:33.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 04:36:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 25 04:36:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:34.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:34 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 25 04:36:34 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 25 04:36:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:34 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Nov 25 04:36:35 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Nov 25 04:36:35 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Nov 25 04:36:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:35.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 04:36:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 25 04:36:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Nov 25 04:36:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 04:36:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 04:36:36 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 25 04:36:36 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 25 04:36:36 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Scheduled restart job, restart counter is at 1.
Nov 25 04:36:36 np0005534696 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:36:36 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.044s CPU time.
Nov 25 04:36:36 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:36:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:36 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 04:36:36 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 25 04:36:36 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:36 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:36:36 np0005534696 podman[91982]: 2025-11-25 09:36:36.935377885 +0000 UTC m=+0.028670564 container create 7f04b6f6f70e07ee24d0d7e2051e62fdbb63ce81d85e541a5387b78eb894c8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:36:36 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1153f4c9278801ab2b14c0d0a4615b40bced25fa87478e8181caef1e8b8fdcb4/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:36 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1153f4c9278801ab2b14c0d0a4615b40bced25fa87478e8181caef1e8b8fdcb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:36 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1153f4c9278801ab2b14c0d0a4615b40bced25fa87478e8181caef1e8b8fdcb4/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:36 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1153f4c9278801ab2b14c0d0a4615b40bced25fa87478e8181caef1e8b8fdcb4/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:36:36 np0005534696 podman[91982]: 2025-11-25 09:36:36.972772202 +0000 UTC m=+0.066064891 container init 7f04b6f6f70e07ee24d0d7e2051e62fdbb63ce81d85e541a5387b78eb894c8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:36:36 np0005534696 podman[91982]: 2025-11-25 09:36:36.976185789 +0000 UTC m=+0.069478458 container start 7f04b6f6f70e07ee24d0d7e2051e62fdbb63ce81d85e541a5387b78eb894c8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 25 04:36:36 np0005534696 bash[91982]: 7f04b6f6f70e07ee24d0d7e2051e62fdbb63ce81d85e541a5387b78eb894c8a3
Nov 25 04:36:36 np0005534696 podman[91982]: 2025-11-25 09:36:36.923270972 +0000 UTC m=+0.016563661 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:36:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:36 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:36:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:36 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:36:36 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:36:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:36:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:36:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:36:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:36:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:36:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:36:37 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 25 04:36:37 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 25 04:36:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:37.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:37 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Nov 25 04:36:37 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579153061s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 active pruub 214.059860229s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:37 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579126358s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059860229s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:37 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578181267s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 active pruub 214.059829712s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:37 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578165054s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059829712s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:37 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 04:36:37 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 25 04:36:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:36:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:36:38 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 25 04:36:38 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 25 04:36:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:38 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 04:36:38 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 25 04:36:38 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Nov 25 04:36:38 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:38 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:38 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:38 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:39 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 04:36:39 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 04:36:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:39 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Nov 25 04:36:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 04:36:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 25 04:36:39 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:39 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:40 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Nov 25 04:36:40 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Nov 25 04:36:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:40 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 04:36:40 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 25 04:36:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Nov 25 04:36:40 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989828110s) [0] async=[0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 42'1151 active pruub 220.505386353s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:40 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988760948s) [0] async=[0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 42'1151 active pruub 220.504470825s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:40 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988720894s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.504470825s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:40 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989717484s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.505386353s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:41 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 25 04:36:41 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 25 04:36:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:41.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:41 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Nov 25 04:36:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:42 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 25 04:36:42 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 25 04:36:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:43 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:36:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:43 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:36:43 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 25 04:36:43 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 25 04:36:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:43.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:44 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.9 deep-scrub starts
Nov 25 04:36:44 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.9 deep-scrub ok
Nov 25 04:36:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:45 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 25 04:36:45 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 25 04:36:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:45.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:46.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:46 np0005534696 python3.9[92221]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:36:46 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 25 04:36:46 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 25 04:36:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:47 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 04:36:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:47 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 04:36:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:47.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:47 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Nov 25 04:36:47 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466200829s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 active pruub 222.015991211s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:47 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466086388s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.015991211s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:47 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466098785s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 active pruub 222.016296387s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:47 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466066360s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.016296387s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 04:36:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 25 04:36:48 np0005534696 python3.9[92510]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 04:36:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:48 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.3 scrub starts
Nov 25 04:36:48 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.3 scrub ok
Nov 25 04:36:48 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Nov 25 04:36:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:48 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 04:36:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 04:36:48 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:36:49 np0005534696 python3.9[92662]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:49 np0005534696 python3.9[92831]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:49 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 25 04:36:49 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 25 04:36:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:49.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:49 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Nov 25 04:36:49 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 25 04:36:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 25 04:36:49 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:36:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:50 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0580013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:50 np0005534696 python3.9[92984]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 04:36:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:50 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 04:36:50 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 04:36:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Nov 25 04:36:50 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989589691s) [0] async=[0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 42'1151 active pruub 230.560867310s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:50 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989502907s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.560867310s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:50 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987565994s) [0] async=[0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 42'1151 active pruub 230.559646606s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:36:50 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987524986s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.559646606s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:36:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:51 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058002090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093651 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:36:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:51 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:51 np0005534696 python3.9[93137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:36:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:51 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 25 04:36:51 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 25 04:36:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:51.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:51 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Nov 25 04:36:52 np0005534696 python3.9[93290]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:36:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:52 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540014c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:52.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:52 np0005534696 python3.9[93368]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:36:52 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Nov 25 04:36:52 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 04:36:52 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 04:36:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:53 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:53 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058002b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Nov 25 04:36:53 np0005534696 python3.9[93521]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:36:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:53 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 25 04:36:53 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 25 04:36:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:53.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:54 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058002b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:54.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:54 np0005534696 python3.9[93676]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 04:36:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:54 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 04:36:54 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 04:36:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:55 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058002b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:55 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058002b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:55 np0005534696 python3.9[93830]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 04:36:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:36:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:55.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:55 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 25 04:36:55 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 25 04:36:56 np0005534696 python3.9[93984]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 04:36:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:56 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058002b70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:56.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:56 np0005534696 python3.9[94136]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 04:36:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:56 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 04:36:56 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 04:36:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:57 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:57 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:57 np0005534696 python3.9[94289]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:36:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:57.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:57 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1e scrub starts
Nov 25 04:36:57 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1e scrub ok
Nov 25 04:36:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 25 04:36:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Nov 25 04:36:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:58 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:36:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:36:58.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:36:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:58 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Nov 25 04:36:58 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Nov 25 04:36:58 np0005534696 python3.9[94443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:36:59 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 25 04:36:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:59 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058004890 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:36:59 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:36:59 np0005534696 python3.9[94621]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:36:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:36:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:36:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:36:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:36:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:36:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:36:59.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:36:59 np0005534696 python3.9[94699]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:36:59 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 04:36:59 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 04:37:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 25 04:37:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Nov 25 04:37:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:00 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb054002320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:00 np0005534696 python3.9[94852]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:37:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:00.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:37:00 np0005534696 python3.9[94930]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:37:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:00 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 04:37:00 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 04:37:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 25 04:37:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Nov 25 04:37:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:01 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:01 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:01 np0005534696 python3.9[95083]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:37:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:01.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1d scrub starts
Nov 25 04:37:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1d scrub ok
Nov 25 04:37:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Nov 25 04:37:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:02 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:02.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Nov 25 04:37:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:02 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 25 04:37:02 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 25 04:37:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:03 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:03 np0005534696 python3.9[95236]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:37:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:03 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Nov 25 04:37:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:03.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:03 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 04:37:03 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 04:37:03 np0005534696 python3.9[95389]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 04:37:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:04 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:04 np0005534696 python3.9[95539]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:37:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:04.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:04 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.2 scrub starts
Nov 25 04:37:04 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.2 scrub ok
Nov 25 04:37:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:05 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:05 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:05 np0005534696 python3.9[95692]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:37:05 np0005534696 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 04:37:05 np0005534696 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 04:37:05 np0005534696 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 04:37:05 np0005534696 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 04:37:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:37:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:05 np0005534696 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 04:37:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:05.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:05 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 25 04:37:05 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 25 04:37:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:06 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:06 np0005534696 python3.9[95854]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 04:37:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:06.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:06 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 25 04:37:06 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 25 04:37:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:07 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:07 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:07.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:07 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 25 04:37:07 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 25 04:37:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 25 04:37:08 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Nov 25 04:37:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:08 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:37:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:08.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:37:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:08 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 25 04:37:08 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 25 04:37:09 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 25 04:37:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:09 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:09 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:09 np0005534696 python3.9[96009]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:37:09 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 25 04:37:10 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 25 04:37:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 25 04:37:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Nov 25 04:37:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:10 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:10 np0005534696 python3.9[96164]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:37:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:37:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:37:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:37:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:11 np0005534696 systemd-logind[744]: Session 37 logged out. Waiting for processes to exit.
Nov 25 04:37:11 np0005534696 systemd[1]: session-37.scope: Deactivated successfully.
Nov 25 04:37:11 np0005534696 systemd[1]: session-37.scope: Consumed 45.866s CPU time.
Nov 25 04:37:11 np0005534696 systemd-logind[744]: Removed session 37.
Nov 25 04:37:11 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 25 04:37:11 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 25 04:37:11 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 25 04:37:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:11 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:11 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:11.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:12 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.17 scrub starts
Nov 25 04:37:12 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.17 scrub ok
Nov 25 04:37:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 25 04:37:12 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Nov 25 04:37:12 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.334072113s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 active pruub 246.056503296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:12 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.333869934s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 246.056503296s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:37:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:12 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:12.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:12 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Nov 25 04:37:12 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:12 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 04:37:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:13 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 25 04:37:13 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 25 04:37:13 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 25 04:37:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:13 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:13 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064002060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:13 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Nov 25 04:37:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:13.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:14 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 25 04:37:14 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 25 04:37:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 25 04:37:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 25 04:37:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:37:14 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:37:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:14 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064002060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:14.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:15 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 25 04:37:15 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 25 04:37:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Nov 25 04:37:15 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:15 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:37:15 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041230202s) [0] async=[0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 42'1151 active pruub 254.780319214s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:15 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041177750s) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 254.780319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:37:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:15 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064002060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:15 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:37:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:15.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:16 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 25 04:37:16 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 25 04:37:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Nov 25 04:37:16 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 25 04:37:16 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 25 04:37:16 np0005534696 systemd-logind[744]: New session 38 of user zuul.
Nov 25 04:37:16 np0005534696 systemd[1]: Started Session 38 of User zuul.
Nov 25 04:37:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:16 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:16 np0005534696 python3.9[96350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:37:17 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 25 04:37:17 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 25 04:37:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Nov 25 04:37:17 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:17 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:37:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:17 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:17 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064002060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:17 np0005534696 python3.9[96508]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 04:37:18 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 25 04:37:18 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 25 04:37:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Nov 25 04:37:18 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=112/113 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:37:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:18 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:37:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:18.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:37:18 np0005534696 python3.9[96661]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:37:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:19 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 25 04:37:19 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 25 04:37:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:19 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:19 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:19 np0005534696 python3.9[96771]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 04:37:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:20 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 25 04:37:20 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 25 04:37:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:20 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb064003810 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:20.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 25 04:37:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:21 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 25 04:37:21 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 25 04:37:21 np0005534696 python3.9[96925]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:37:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:21 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:21 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Nov 25 04:37:21 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 25 04:37:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:21 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:21.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:22 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 25 04:37:22 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 25 04:37:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 25 04:37:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:22 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb070003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:22.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:22 np0005534696 python3.9[97081]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:37:23 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 25 04:37:23 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 25 04:37:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:23 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Nov 25 04:37:23 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627995491s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 active pruub 258.434844971s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:23 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627876282s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 258.434844971s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:37:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 25 04:37:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:23 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb070003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:23 np0005534696 python3.9[97235]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:37:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:23.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:24 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Nov 25 04:37:24 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Nov 25 04:37:24 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Nov 25 04:37:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 25 04:37:24 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:24 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 04:37:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:24 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:24 np0005534696 python3.9[97388]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 04:37:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:24.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:25 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 25 04:37:25 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 25 04:37:25 np0005534696 python3.9[97538]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:37:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:25 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Nov 25 04:37:25 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 25 04:37:25 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:37:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:25 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:37:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:25.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:25 np0005534696 python3.9[97698]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:37:26 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 25 04:37:26 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 25 04:37:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093726 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:37:26 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Nov 25 04:37:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994240761s) [1] async=[1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 42'1151 active pruub 265.825317383s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:26 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994033813s) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 265.825317383s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:37:26 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 25 04:37:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:26 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:37:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:37:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:27 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 04:37:27 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 04:37:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:27 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.231927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447231946, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3422, "num_deletes": 251, "total_data_size": 7935788, "memory_usage": 8046880, "flush_reason": "Manual Compaction"}
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447241881, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 5110746, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7258, "largest_seqno": 10675, "table_properties": {"data_size": 5094817, "index_size": 10247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4613, "raw_key_size": 42628, "raw_average_key_size": 23, "raw_value_size": 5059393, "raw_average_value_size": 2782, "num_data_blocks": 444, "num_entries": 1818, "num_filter_entries": 1818, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063323, "oldest_key_time": 1764063323, "file_creation_time": 1764063447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10126 microseconds, and 7011 cpu microseconds.
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.242050) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 5110746 bytes OK
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.242132) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.242953) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.242965) EVENT_LOG_v1 {"time_micros": 1764063447242962, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.242975) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7918747, prev total WAL file size 7918747, number of live WAL files 2.
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.244433) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4990KB)], [18(12MB)]
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447244484, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 17798690, "oldest_snapshot_seqno": -1}
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3921 keys, 13861275 bytes, temperature: kUnknown
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447277899, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 13861275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13829408, "index_size": 20941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9861, "raw_key_size": 99646, "raw_average_key_size": 25, "raw_value_size": 13751975, "raw_average_value_size": 3507, "num_data_blocks": 906, "num_entries": 3921, "num_filter_entries": 3921, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764063447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.278072) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13861275 bytes
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.278587) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 531.9 rd, 414.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.9, 12.1 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 4451, records dropped: 530 output_compression: NoCompression
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.278603) EVENT_LOG_v1 {"time_micros": 1764063447278595, "job": 8, "event": "compaction_finished", "compaction_time_micros": 33464, "compaction_time_cpu_micros": 22207, "output_level": 6, "num_output_files": 1, "total_output_size": 13861275, "num_input_records": 4451, "num_output_records": 3921, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447279226, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063447280641, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.244382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.280672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.280675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.280677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.280678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:37:27.280679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:37:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:27 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:27 np0005534696 python3.9[97853]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:37:27 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Nov 25 04:37:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:27.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:28 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 25 04:37:28 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 25 04:37:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:28 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:28.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:28 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Nov 25 04:37:28 np0005534696 python3.9[98141]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 04:37:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:28 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 25 04:37:29 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 25 04:37:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:29 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:29 np0005534696 python3.9[98292]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:37:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:29 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:29.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:29 np0005534696 python3.9[98447]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:37:29 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 25 04:37:29 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 25 04:37:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:30 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:30.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:37:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:30 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 25 04:37:31 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 25 04:37:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:31 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:31 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Nov 25 04:37:31 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 25 04:37:31 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151810646s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 active pruub 270.017272949s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:31 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151712418s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 270.017272949s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:37:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:31 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058005700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:31 np0005534696 python3.9[98601]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:37:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:37:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:31.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:37:31 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 25 04:37:32 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 25 04:37:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 25 04:37:32 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Nov 25 04:37:32 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:32 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 04:37:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:32 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:37:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:32.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:37:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:33 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 25 04:37:33 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 25 04:37:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:33 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb078002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 25 04:37:33 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Nov 25 04:37:33 np0005534696 python3.9[98756]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:37:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:33 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:33 np0005534696 python3.9[98911]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 25 04:37:33 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 25 04:37:34 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 25 04:37:34 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:37:34 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 25 04:37:34 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Nov 25 04:37:34 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823269844s) [1] async=[1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 42'1151 active pruub 274.714538574s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:34 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823235512s) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 274.714538574s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 04:37:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:34 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:34 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 25 04:37:35 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 25 04:37:35 np0005534696 systemd[1]: session-38.scope: Deactivated successfully.
Nov 25 04:37:35 np0005534696 systemd[1]: session-38.scope: Consumed 13.292s CPU time.
Nov 25 04:37:35 np0005534696 systemd-logind[744]: Session 38 logged out. Waiting for processes to exit.
Nov 25 04:37:35 np0005534696 systemd-logind[744]: Removed session 38.
Nov 25 04:37:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:35 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 25 04:37:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Nov 25 04:37:35 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:37:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:35 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:37:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:35 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:37:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:35.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:35 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 25 04:37:35 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 25 04:37:36 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 25 04:37:36 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Nov 25 04:37:36 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:36 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 04:37:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:36 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:36.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:36 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Nov 25 04:37:36 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Nov 25 04:37:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:37 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Nov 25 04:37:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:37:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:37.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:37:38 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Nov 25 04:37:38 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 04:37:38 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 04:37:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:38 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:38 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:37:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:38 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:37:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:38.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:39 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:37:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:37:39 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Nov 25 04:37:39 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=129/130 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 04:37:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:39 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:39.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:39 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 25 04:37:39 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 25 04:37:40 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:37:40 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:37:40 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:37:40 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:37:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:40 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:40 np0005534696 systemd-logind[744]: New session 39 of user zuul.
Nov 25 04:37:40 np0005534696 systemd[1]: Started Session 39 of User zuul.
Nov 25 04:37:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:40.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:37:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:41 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:41 np0005534696 python3.9[99201]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:37:41 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Nov 25 04:37:41 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 25 04:37:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:41 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:37:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:41 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:41.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:42 np0005534696 python3.9[99356]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:37:42 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 25 04:37:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:42 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:37:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:42.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:37:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:43 np0005534696 python3.9[99574]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:37:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:43 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0780045b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:43 np0005534696 systemd[1]: session-39.scope: Deactivated successfully.
Nov 25 04:37:43 np0005534696 systemd[1]: session-39.scope: Consumed 1.685s CPU time.
Nov 25 04:37:43 np0005534696 systemd-logind[744]: Session 39 logged out. Waiting for processes to exit.
Nov 25 04:37:43 np0005534696 systemd-logind[744]: Removed session 39.
Nov 25 04:37:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:43 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:43 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:37:43 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:37:43 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 25 04:37:43 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Nov 25 04:37:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:43.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:44 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:44 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Nov 25 04:37:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 25 04:37:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:45 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:45 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Nov 25 04:37:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:37:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:45.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:46 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:37:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:46.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:37:46 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Nov 25 04:37:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:47 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0bfc80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:47 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:47 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Nov 25 04:37:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:37:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:37:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093748 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:37:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:48 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:48 np0005534696 systemd-logind[744]: New session 40 of user zuul.
Nov 25 04:37:48 np0005534696 systemd[1]: Started Session 40 of User zuul.
Nov 25 04:37:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:48.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:49 np0005534696 python3.9[99761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:37:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540019f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:49.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:50 np0005534696 python3.9[99917]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:37:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:50 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c0af0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:50.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:37:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:50 np0005534696 python3.9[100073]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:37:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:51 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:51 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:51 np0005534696 python3.9[100158]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:37:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:51.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:52 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540019f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:37:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:52.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:37:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:53 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540019f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:53 np0005534696 python3.9[100312]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:37:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:53 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:53.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:54 np0005534696 python3.9[100509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:37:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:54 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:54.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:54 np0005534696 python3.9[100661]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:37:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:55 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540019f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:55 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540019f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:55 np0005534696 python3.9[100823]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:37:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:37:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:55 np0005534696 python3.9[100901]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:37:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:37:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:55.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:37:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:56 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:56 np0005534696 python3.9[101054]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:37:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:56.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:56 np0005534696 python3.9[101132]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:37:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:57 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:57 np0005534696 python3.9[101285]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:37:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:57 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540019f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:57 np0005534696 python3.9[101437]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:37:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:57.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:58 np0005534696 python3.9[101590]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:37:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:58 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c1410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:37:58.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:37:58 np0005534696 python3.9[101742]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:37:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:59 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:59 np0005534696 python3.9[101920]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:37:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:37:59 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:37:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:37:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:37:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:37:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:37:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:37:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:37:59.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:00 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:00.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093800 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:38:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:01 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c1410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:01 np0005534696 python3.9[102075]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:38:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:01 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:01 np0005534696 python3.9[102229]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:38:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:01.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:02 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058007e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:02 np0005534696 python3.9[102382]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:38:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:38:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:38:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:03 np0005534696 python3.9[102534]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:38:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:03 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:03 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:03 np0005534696 python3.9[102688]: ansible-service_facts Invoked
Nov 25 04:38:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:03 np0005534696 network[102706]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:38:03 np0005534696 network[102707]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:38:03 np0005534696 network[102708]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:38:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:03.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:04 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:04.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:05 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:05 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:05.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:06 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:06.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:07 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:07 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:07.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093808 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:38:08 np0005534696 python3.9[103164]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:38:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:08 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:08.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:09 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:09 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:38:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:09 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:09.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:10 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:10 np0005534696 python3.9[103319]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 04:38:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:10.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:11 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:11 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:11 np0005534696 python3.9[103472]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:11.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:12 np0005534696 python3.9[103551]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:12 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:38:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:12 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:38:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:12 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:12.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:12 np0005534696 python3.9[103703]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:13 np0005534696 python3.9[103781]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:13 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:13 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:13.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:14 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:14 np0005534696 python3.9[103935]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:14 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:38:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:14 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:38:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:15 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:15 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:15.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:15 np0005534696 python3.9[104088]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:38:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:16 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:16.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:16 np0005534696 python3.9[104176]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:38:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:17 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:17 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:17 np0005534696 systemd[1]: session-40.scope: Deactivated successfully.
Nov 25 04:38:17 np0005534696 systemd[1]: session-40.scope: Consumed 17.417s CPU time.
Nov 25 04:38:17 np0005534696 systemd-logind[744]: Session 40 logged out. Waiting for processes to exit.
Nov 25 04:38:17 np0005534696 systemd-logind[744]: Removed session 40.
Nov 25 04:38:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:17.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:17 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:38:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:18 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:38:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:38:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:19 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:19 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:20 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb060000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:20 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:38:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:21 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:21 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:21.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:22 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:38:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:38:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:23 np0005534696 systemd-logind[744]: New session 41 of user zuul.
Nov 25 04:38:23 np0005534696 systemd[1]: Started Session 41 of User zuul.
Nov 25 04:38:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:23 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06000a310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:23 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:23 np0005534696 python3.9[104390]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:23.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:24 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:38:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:24 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:38:24 np0005534696 python3.9[104543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:24 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:24 np0005534696 python3.9[104621]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093824 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:38:25 np0005534696 systemd-logind[744]: Session 41 logged out. Waiting for processes to exit.
Nov 25 04:38:25 np0005534696 systemd[1]: session-41.scope: Deactivated successfully.
Nov 25 04:38:25 np0005534696 systemd[1]: session-41.scope: Consumed 1.219s CPU time.
Nov 25 04:38:25 np0005534696 systemd-logind[744]: Removed session 41.
Nov 25 04:38:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:25 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:25 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06000a310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:25.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:26 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:27 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:38:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:27 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:27 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb07c0c2a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:27 np0005534696 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 04:38:27 np0005534696 systemd[1]: session-18.scope: Consumed 6.274s CPU time.
Nov 25 04:38:27 np0005534696 systemd-logind[744]: Session 18 logged out. Waiting for processes to exit.
Nov 25 04:38:27 np0005534696 systemd-logind[744]: Removed session 18.
Nov 25 04:38:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:27.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:28 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06000a310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:28.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:29 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:29 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:29 np0005534696 systemd-logind[744]: New session 42 of user zuul.
Nov 25 04:38:29 np0005534696 systemd[1]: Started Session 42 of User zuul.
Nov 25 04:38:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:29.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093830 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:38:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:30 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:30 np0005534696 python3.9[104805]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:38:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:30.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:31 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:31 np0005534696 python3.9[104962]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:31 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06000a4b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:31.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:32 np0005534696 python3.9[105138]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:32 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:32 np0005534696 python3.9[105216]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.lp1c51m5 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:33 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:33 np0005534696 python3.9[105369]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:33 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:33 np0005534696 python3.9[105447]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.bdyx25cu recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:33.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:34 np0005534696 python3.9[105600]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:38:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:34 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06000a4b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:34.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:34 np0005534696 python3.9[105752]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:35 np0005534696 python3.9[105830]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:38:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:35 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:35 np0005534696 python3.9[105983]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:35 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:35 np0005534696 python3.9[106061]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:38:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:35.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:36 np0005534696 python3.9[106214]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:36 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:36.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:36 np0005534696 python3.9[106366]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:37 np0005534696 python3.9[106444]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:37 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:37 np0005534696 python3.9[106597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:37 np0005534696 python3.9[106675]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:37.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:38 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:38.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:38 np0005534696 python3.9[106828]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:38:38 np0005534696 systemd[1]: Reloading.
Nov 25 04:38:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:38 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:38:38 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:38:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:39 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:39 np0005534696 python3.9[107044]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:39 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:39 np0005534696 python3.9[107122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:39.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:40 np0005534696 python3.9[107275]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:40 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:40 np0005534696 python3.9[107353]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:40.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:41 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:41 np0005534696 python3.9[107505]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:38:41 np0005534696 systemd[1]: Reloading.
Nov 25 04:38:41 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:38:41 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:38:41 np0005534696 systemd[1]: Starting Create netns directory...
Nov 25 04:38:41 np0005534696 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 04:38:41 np0005534696 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 04:38:41 np0005534696 systemd[1]: Finished Create netns directory.
Nov 25 04:38:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:41 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06000a4b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:41.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:42 np0005534696 python3.9[107700]: ansible-ansible.builtin.service_facts Invoked
Nov 25 04:38:42 np0005534696 network[107717]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:38:42 np0005534696 network[107718]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:38:42 np0005534696 network[107719]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:38:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:42 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:42.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:43 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:43 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:44 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb06000a4b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:44.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:45 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:38:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:38:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:45 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:45 np0005534696 python3.9[108064]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:45.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:46 np0005534696 python3.9[108142]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:38:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:38:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:38:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:38:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:46 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:46 np0005534696 python3.9[108294]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:47 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:47 np0005534696 python3.9[108447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:47 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:47 np0005534696 python3.9[108525]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:47.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:48 np0005534696 python3.9[108679]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 04:38:48 np0005534696 systemd[1]: Starting Time & Date Service...
Nov 25 04:38:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:48 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:48 np0005534696 systemd[1]: Started Time & Date Service.
Nov 25 04:38:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:48.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:49 np0005534696 python3.9[108860]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb090000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:38:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:38:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:49 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:49 np0005534696 python3.9[109013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:49 np0005534696 python3.9[109092]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:49.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:50 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:50 np0005534696 python3.9[109244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:38:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:50.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:38:50 np0005534696 python3.9[109322]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.sm0v2ozz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:51 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:51 np0005534696 python3.9[109475]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:51 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:51 np0005534696 python3.9[109553]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:51.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:52 np0005534696 python3.9[109706]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:38:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:52 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:53 np0005534696 python3[109859]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 04:38:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:53 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:53 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:53 np0005534696 python3.9[110012]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:53 np0005534696 python3.9[110091]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:53.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:54 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:54 np0005534696 python3.9[110243]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:38:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:54.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:38:54 np0005534696 python3.9[110321]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:55 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0900019c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:55 np0005534696 python3.9[110474]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:55 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:55 np0005534696 python3.9[110552]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:38:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:38:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:55.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:38:56 np0005534696 python3.9[110705]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:56 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:56 np0005534696 python3.9[110783]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:38:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:56.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:38:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:57 np0005534696 python3.9[110935]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:38:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:57 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:57 np0005534696 python3.9[111016]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:57 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:38:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:57.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:38:58 np0005534696 python3.9[111169]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:38:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:58 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb058008380 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:58 np0005534696 python3.9[111324]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:38:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:38:58.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:38:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:59 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:38:59 np0005534696 python3.9[111477]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:59 np0005534696 kernel: ganesha.nfsd[99603]: segfault at 50 ip 00007fb10bb8b32e sp 00007fb0bfffe210 error 4 in libntirpc.so.5.8[7fb10bb70000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 25 04:38:59 np0005534696 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 04:38:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[91993]: 25/11/2025 09:38:59 : epoch 692578a4 : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb0540043a0 fd 48 proxy ignored for local
Nov 25 04:38:59 np0005534696 systemd[1]: Started Process Core Dump (PID 111633/UID 0).
Nov 25 04:38:59 np0005534696 python3.9[111656]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:38:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:38:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:38:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:38:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:38:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:38:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:38:59.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:00 np0005534696 python3.9[111809]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 04:39:00 np0005534696 systemd-coredump[111647]: Process 91997 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 58:#012#0  0x00007fb10bb8b32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 25 04:39:00 np0005534696 systemd[1]: systemd-coredump@1-111633-0.service: Deactivated successfully.
Nov 25 04:39:00 np0005534696 podman[111882]: 2025-11-25 09:39:00.520284944 +0000 UTC m=+0.018281117 container died 7f04b6f6f70e07ee24d0d7e2051e62fdbb63ce81d85e541a5387b78eb894c8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 25 04:39:00 np0005534696 systemd[1]: var-lib-containers-storage-overlay-1153f4c9278801ab2b14c0d0a4615b40bced25fa87478e8181caef1e8b8fdcb4-merged.mount: Deactivated successfully.
Nov 25 04:39:00 np0005534696 systemd[82992]: Created slice User Background Tasks Slice.
Nov 25 04:39:00 np0005534696 systemd[82992]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 04:39:00 np0005534696 podman[111882]: 2025-11-25 09:39:00.540161507 +0000 UTC m=+0.038157670 container remove 7f04b6f6f70e07ee24d0d7e2051e62fdbb63ce81d85e541a5387b78eb894c8a3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:39:00 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Main process exited, code=exited, status=139/n/a
Nov 25 04:39:00 np0005534696 systemd[82992]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 04:39:00 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Failed with result 'exit-code'.
Nov 25 04:39:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:00.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:00 np0005534696 python3.9[112000]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 04:39:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:01 np0005534696 systemd-logind[744]: Session 42 logged out. Waiting for processes to exit.
Nov 25 04:39:01 np0005534696 systemd[1]: session-42.scope: Deactivated successfully.
Nov 25 04:39:01 np0005534696 systemd[1]: session-42.scope: Consumed 20.286s CPU time.
Nov 25 04:39:01 np0005534696 systemd-logind[744]: Removed session 42.
Nov 25 04:39:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:01.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:02.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:03.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:04.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093905 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:39:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:05.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:06 np0005534696 systemd-logind[744]: New session 43 of user zuul.
Nov 25 04:39:06 np0005534696 systemd[1]: Started Session 43 of User zuul.
Nov 25 04:39:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:06.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:06 np0005534696 python3.9[112186]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 04:39:07 np0005534696 python3.9[112339]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:39:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:08 np0005534696 python3.9[112494]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 25 04:39:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:08 np0005534696 python3.9[112646]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.8b1xjy6e follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:09 np0005534696 python3.9[112772]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.8b1xjy6e mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063548.4329972-104-42726642714520/.source.8b1xjy6e _original_basename=.rn11dhx8 follow=False checksum=719236507bdcc56ceb2be3ce1ef5008b5cfc2235 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:10 np0005534696 python3.9[112925]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:39:10 np0005534696 python3.9[113077]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/QqShzRf5Fxs30q3tSf7IhrByfRVQwrs4CVW/gcd2Sdcp7tmVXVNFpJc8XlgTmWxcSLbFtAv0HgJOJ3p6/+g394nChAIaM55uhK/RLFqBZ/byiFqEjvN2LkEWuUVdvbZM808GhONJnWQtg70nn99jeLP34zkSD7gsU7cykxF7K7VyeBfeSiuOcyTjXvVfXr9TZxCZMrsb4eWFZAZ4QERXITlLcZthwc0kd17QWJWLo8Ssv4Qu0DtCHtqHO07s7Nz/CpSs0TX5jVM+C+2rAMn+aAZ4J25X8di4ABF5tO27d+ePazRlU5PWjb8n6kdy1B/cjHgvajXOoUPb5RjyVx2IgULBXaWsIRO23wp8YqiE1OdTly2+Nr5KiTPvR5yqq9C6aBNzS7YyUQc6Rf2RBAaLQbA36NJLGvPUWC7iYVtWdGoTfcTmzqkD2s3hzZl+zU2xNS0IpwByJsOJVIijtGFh1Y45uujq0WUJNPf1ayrY2Z/TV+iO/1iah3JArjyNiq8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPD1sScOy6Aiq5PZkl3KepHqJnvlMIZW4R0DzMl4b3w#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO/iVb5vehoW1eqrk4jdR3j25kacpoWkaPIq4PHAndTN4lXAEwSRab7iUqXkAAaYvUnrCJ86WUoAYGkII0QB5wA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSE1VMIuB9MiQ17/QHDRAbfwrBNbTb+wZH1rCqeQvAxcHqZYp6TugJnyWX+nah5oDk8vz2PCIUW2lm/tVgP4Y2JHeaN2uMNgVnz1WtD6lCQORMYi1R+KpBgiAQoZAjAyC5Ugx5LWbDvrwtpt0zi2DEgCr2Zao5DG5UAaIcs7/Rj2LRx3hgA4jJ9xJKHVi5bUZfjIlWxLzVXVYT+dvUNrZoiVMBcaUMZRpU4tJ/76mE2jbqsfHEPFwHZ6ljoIegFbzNYoKYMCPK+DeOs/73xD4r/nzeQOK3IQzMOEEVaUYvceA+EPX4M+MrKfkNrJwf35qTOFJpb368gJsebA9uXjzPfzX/uh1atxLv5SihEzC5fHdiZ3BZ3wLEy0C7lvXyRBZdQx+anEYQnDepM/ThOT4YR2BNSCdRS2OpzeSJDS+o5CS++zCqWM4yI3lufZm8O8JqPEblV518196TSyMlAOzPbjEjrUaYGdljY5S2OzKA4PBJW4hW4RyBtjcZWJBpNlM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBoG9NSSqw98oHfgpW8u+wJYHDhMiOjIhpCElLIROYdO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHFL1noqwoCl3YzxWiRl0GcsDxYERT1o8e2TvLqUkxWuv8xj0oHuq7+GhcKu7HpiCls71ko7MDcOX4zteG544k4=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBYH+LEkGk38QCoX+uCPb3zHk7+XCeEWV22HpalqUrYF70U5Myra5/E2/v2kioqGNh5TR9q+A7kNO0JU78Ai+6UBv5aJlbEptu33E5t38qiAv3rpyypYwQ8PdWBl7OCeDcqz0EyYAZEw7rLbCWimqRhYsSXuUND+rRboiuI8DEX229oAgnRmIjyPJTTdKGiM3FTdl9YiSbYNyBykzJ8AugCfme4+hmds+8LJloh2aJjRJCs3/GvxdaGJcjBWAqN3Aurg+gPekKe4fwmOir2+KpqBDQE9YMfiBvraaCMGrDXkAjPdsycsvGMsWckhOgEW5qpTIt+ca5kcrK43ChAH5R/PpHlHnEYqw2o26BLmqIejfmXKRSxmH/Fq9Ldj3DMLJr4NTFBfJAl8wqsUKs6/0jngwOCYz6NLs7GgGZLMYv6wbRVgUpCc4ikQ8f1EDmXTdtqxef+QdmLTgWY1qCqe5lL8BcDDCjOTLJ6bbLUAdubY1z4vb6SFVcamH4SkSCFxs=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGHCQQOw3EbtZ2XAFA2gGrEnb7MaEAFwIJjyskket7pD#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFP8ctNKDLqIcODtgMol02WD/NgFM5ja/WeN20e07JH/Mz/Ge/v2/ybsY8LOtiyzixlX47XT8hWBR4IBwS2uvfM=#012 create=True mode=0644 path=/tmp/ansible.8b1xjy6e state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:10.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:10 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Scheduled restart job, restart counter is at 2.
Nov 25 04:39:10 np0005534696 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:39:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:10 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:39:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:10 np0005534696 podman[113140]: 2025-11-25 09:39:10.922061577 +0000 UTC m=+0.028283445 container create 3abdc6584e89fa8649b6b4dd46b5a9f53468803b5d629e649cbca3281d88567a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:39:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b563f959fae5753f6d9fbb22cbd198eb39399f86b056e73986eb85af34aa7432/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:39:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b563f959fae5753f6d9fbb22cbd198eb39399f86b056e73986eb85af34aa7432/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:39:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b563f959fae5753f6d9fbb22cbd198eb39399f86b056e73986eb85af34aa7432/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:39:10 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b563f959fae5753f6d9fbb22cbd198eb39399f86b056e73986eb85af34aa7432/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:39:10 np0005534696 podman[113140]: 2025-11-25 09:39:10.967190221 +0000 UTC m=+0.073412099 container init 3abdc6584e89fa8649b6b4dd46b5a9f53468803b5d629e649cbca3281d88567a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 25 04:39:10 np0005534696 podman[113140]: 2025-11-25 09:39:10.971683828 +0000 UTC m=+0.077905686 container start 3abdc6584e89fa8649b6b4dd46b5a9f53468803b5d629e649cbca3281d88567a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 25 04:39:10 np0005534696 bash[113140]: 3abdc6584e89fa8649b6b4dd46b5a9f53468803b5d629e649cbca3281d88567a
Nov 25 04:39:10 np0005534696 podman[113140]: 2025-11-25 09:39:10.910319322 +0000 UTC m=+0.016541200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:39:10 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:39:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:10 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:39:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:10 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:39:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:11 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:39:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:11 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:39:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:11 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:39:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:11 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:39:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:11 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:39:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:11 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:39:11 np0005534696 python3.9[113323]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.8b1xjy6e' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:39:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:11.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:12 np0005534696 python3.9[113478]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.8b1xjy6e state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:12 np0005534696 systemd[1]: session-43.scope: Deactivated successfully.
Nov 25 04:39:12 np0005534696 systemd[1]: session-43.scope: Consumed 3.511s CPU time.
Nov 25 04:39:12 np0005534696 systemd-logind[744]: Session 43 logged out. Waiting for processes to exit.
Nov 25 04:39:12 np0005534696 systemd-logind[744]: Removed session 43.
Nov 25 04:39:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:12.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:13.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:14.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:15.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:16.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:17 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:39:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:17 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:39:17 np0005534696 systemd-logind[744]: New session 44 of user zuul.
Nov 25 04:39:17 np0005534696 systemd[1]: Started Session 44 of User zuul.
Nov 25 04:39:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:17.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:18 np0005534696 python3.9[113663]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:39:18 np0005534696 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 04:39:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:18.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:19 np0005534696 python3.9[113821]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 04:39:19 np0005534696 python3.9[114001]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:39:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:19.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:20 np0005534696 python3.9[114155]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:39:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:20.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:21 np0005534696 python3.9[114309]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:39:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:21.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:22 np0005534696 python3.9[114462]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:22 np0005534696 systemd[1]: session-44.scope: Deactivated successfully.
Nov 25 04:39:22 np0005534696 systemd[1]: session-44.scope: Consumed 2.755s CPU time.
Nov 25 04:39:22 np0005534696 systemd-logind[744]: Session 44 logged out. Waiting for processes to exit.
Nov 25 04:39:22 np0005534696 systemd-logind[744]: Removed session 44.
Nov 25 04:39:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:22.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82f0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:23 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:23.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:24 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:24.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:25 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093925 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:39:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:25 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:25.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:26 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec001d50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:26.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:27 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4003000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:27 np0005534696 systemd-logind[744]: New session 45 of user zuul.
Nov 25 04:39:27 np0005534696 systemd[1]: Started Session 45 of User zuul.
Nov 25 04:39:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:27 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82f0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:27.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:28 np0005534696 python3.9[114662]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:39:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:28 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4003000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:39:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:28.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:39:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:28 np0005534696 python3.9[114818]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:39:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:29 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec003440 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:29 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4003000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:29 np0005534696 python3.9[114903]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 04:39:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:29.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:30 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82f00021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:31 np0005534696 python3.9[115056]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:39:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:31 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4003000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:31 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec003440 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:31.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:32 np0005534696 python3.9[115208]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 04:39:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:32 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e40044f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:32 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:39:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:32.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:32 np0005534696 python3.9[115359]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:39:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:33 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82f00021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:33 np0005534696 python3.9[115510]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:39:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:33 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e40044f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:33 np0005534696 systemd[1]: session-45.scope: Deactivated successfully.
Nov 25 04:39:33 np0005534696 systemd[1]: session-45.scope: Consumed 4.224s CPU time.
Nov 25 04:39:33 np0005534696 systemd-logind[744]: Session 45 logged out. Waiting for processes to exit.
Nov 25 04:39:33 np0005534696 systemd-logind[744]: Removed session 45.
Nov 25 04:39:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:33.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:34 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec0042b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:35 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e40044f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:35 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82f00021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:36.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:36 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e40044f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:39:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:36.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:39:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:37 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec0042b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:37 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e40059e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:38.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:38 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82f00095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:38 np0005534696 systemd-logind[744]: New session 46 of user zuul.
Nov 25 04:39:38 np0005534696 systemd[1]: Started Session 46 of User zuul.
Nov 25 04:39:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:38.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:39 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e40059e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:39 np0005534696 python3.9[115694]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:39:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:39 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec0042b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:40.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:40 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e40059e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:40.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:40 np0005534696 python3.9[115876]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:41 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82f00095a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:41 np0005534696 python3.9[116029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:41 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e40059e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:42 np0005534696 python3.9[116182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:42.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:42 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82ec0053b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:42 np0005534696 python3.9[116305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063581.5725944-156-216990007182919/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=c20123c838c43181b0f7b435c9e827b8d4b20e06 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:42.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:42 np0005534696 python3.9[116457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:43 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82e4006ed0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:39:43 np0005534696 python3.9[116581]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063582.660316-156-71690643136077/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=d372d1b4272dc98810d1b396448f10f5be8f829f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:43 np0005534696 kernel: ganesha.nfsd[114491]: segfault at 50 ip 00007f839ecf132e sp 00007f836dffa210 error 4 in libntirpc.so.5.8[7f839ecd6000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 04:39:43 np0005534696 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 04:39:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[113171]: 25/11/2025 09:39:43 : epoch 6925793e : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f82f000a2b0 fd 37 proxy ignored for local
Nov 25 04:39:43 np0005534696 systemd[1]: Started Process Core Dump (PID 116606/UID 0).
Nov 25 04:39:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:44.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:44 np0005534696 python3.9[116736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:44 np0005534696 python3.9[116859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063583.677972-156-96615816464056/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=44cee775805d7613580396ac761dd6cd01ce4002 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:44 np0005534696 systemd-coredump[116607]: Process 113181 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f839ecf132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 25 04:39:44 np0005534696 systemd[1]: systemd-coredump@2-116606-0.service: Deactivated successfully.
Nov 25 04:39:44 np0005534696 systemd[1]: systemd-coredump@2-116606-0.service: Consumed 1.026s CPU time.
Nov 25 04:39:44 np0005534696 podman[116911]: 2025-11-25 09:39:44.691778001 +0000 UTC m=+0.018771300 container died 3abdc6584e89fa8649b6b4dd46b5a9f53468803b5d629e649cbca3281d88567a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 25 04:39:44 np0005534696 systemd[1]: var-lib-containers-storage-overlay-b563f959fae5753f6d9fbb22cbd198eb39399f86b056e73986eb85af34aa7432-merged.mount: Deactivated successfully.
Nov 25 04:39:44 np0005534696 podman[116911]: 2025-11-25 09:39:44.70943888 +0000 UTC m=+0.036432159 container remove 3abdc6584e89fa8649b6b4dd46b5a9f53468803b5d629e649cbca3281d88567a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:39:44 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Main process exited, code=exited, status=139/n/a
Nov 25 04:39:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:44.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:44 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Failed with result 'exit-code'.
Nov 25 04:39:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:45 np0005534696 python3.9[117048]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:45 np0005534696 python3.9[117201]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:45 np0005534696 python3.9[117354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:46.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:46 np0005534696 python3.9[117477]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063585.6408713-333-5464696256416/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=62353e223b5223a7e66bff1340b5faa362947daf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:46.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:46 np0005534696 python3.9[117629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:47 np0005534696 python3.9[117753]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063586.6246462-333-248934982349194/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=6089737efa6d9cfbc115be5d2d9f479510a3f2d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:47 np0005534696 python3.9[117905]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:48.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:48 np0005534696 python3.9[118029]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063587.477733-333-277880237537642/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=fc2094ce8ee2ead6a378fe980ddc1b405153704c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:48 np0005534696 python3.9[118181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:48.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:49 np0005534696 python3.9[118407]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/093949 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:39:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:39:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:39:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:39:49 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:39:49 np0005534696 python3.9[118565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:50.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:50 np0005534696 python3.9[118689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063589.329463-511-185364837106167/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=0673bb4b8af3c24b97a9e78d14a4db69e8c2312b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:50 np0005534696 python3.9[118841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:39:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:50.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:39:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:50 np0005534696 python3.9[118964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063590.1884537-511-166768588285008/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=6089737efa6d9cfbc115be5d2d9f479510a3f2d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:51 np0005534696 python3.9[119117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:51 np0005534696 python3.9[119240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063591.0442343-511-112256824508484/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=61a936dfe6856fbe4e5fe2fdda7ee222c902eb8a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:52.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:52 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:39:52 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:39:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:52.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:52 np0005534696 python3.9[119418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:53 np0005534696 python3.9[119571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:53 np0005534696 python3.9[119694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063593.123083-706-6742186988008/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:54.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:54 np0005534696 python3.9[119847]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:54.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:54 np0005534696 python3.9[119999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:55 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Scheduled restart job, restart counter is at 3.
Nov 25 04:39:55 np0005534696 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:39:55 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:39:55 np0005534696 podman[120162]: 2025-11-25 09:39:55.180774861 +0000 UTC m=+0.028081701 container create dcb3c585579a96cca5525c4a3ea45c8cb312183165c797862c3dac105714cbb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:39:55 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590403ef31678cae08a1e6a193d0c986c749d97cdea235ce0c37004648525303/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:39:55 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590403ef31678cae08a1e6a193d0c986c749d97cdea235ce0c37004648525303/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:39:55 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590403ef31678cae08a1e6a193d0c986c749d97cdea235ce0c37004648525303/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:39:55 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/590403ef31678cae08a1e6a193d0c986c749d97cdea235ce0c37004648525303/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:39:55 np0005534696 podman[120162]: 2025-11-25 09:39:55.220216383 +0000 UTC m=+0.067523244 container init dcb3c585579a96cca5525c4a3ea45c8cb312183165c797862c3dac105714cbb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 04:39:55 np0005534696 podman[120162]: 2025-11-25 09:39:55.22696275 +0000 UTC m=+0.074269592 container start dcb3c585579a96cca5525c4a3ea45c8cb312183165c797862c3dac105714cbb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 25 04:39:55 np0005534696 bash[120162]: dcb3c585579a96cca5525c4a3ea45c8cb312183165c797862c3dac105714cbb7
Nov 25 04:39:55 np0005534696 podman[120162]: 2025-11-25 09:39:55.168859561 +0000 UTC m=+0.016166422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:39:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:39:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:39:55 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:39:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:39:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:39:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:39:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:39:55 np0005534696 python3.9[120152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063594.5350997-777-88568966651329/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:39:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:39:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:39:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:55 np0005534696 python3.9[120368]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:56.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:56 np0005534696 python3.9[120520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:56.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:56 np0005534696 python3.9[120643]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063596.1207323-850-256515105683046/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:57 np0005534696 python3.9[120796]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:57 np0005534696 python3.9[120949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:39:58.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:58 np0005534696 python3.9[121072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063597.5794523-924-30049908395938/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:39:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:39:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:39:58.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:39:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:58 np0005534696 python3.9[121224]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:39:59 np0005534696 python3.9[121377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:39:59 np0005534696 python3.9[121525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063599.01691-992-168853048218718/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:39:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:39:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:39:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:39:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:00.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:00 np0005534696 python3.9[121678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:00 np0005534696 ceph-mon[75508]: overall HEALTH_OK
Nov 25 04:40:00 np0005534696 python3.9[121830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:00.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:01 np0005534696 python3.9[121953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063600.4301765-1065-71535695566354/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=8c34f7d7181e3a288302d8967ba287f15a2c8402 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:01 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:40:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:01 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:40:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:02.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:02 np0005534696 systemd-logind[744]: Session 46 logged out. Waiting for processes to exit.
Nov 25 04:40:02 np0005534696 systemd[1]: session-46.scope: Deactivated successfully.
Nov 25 04:40:02 np0005534696 systemd[1]: session-46.scope: Consumed 15.568s CPU time.
Nov 25 04:40:02 np0005534696 systemd-logind[744]: Removed session 46.
Nov 25 04:40:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:02.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:04.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:04.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:06.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf20000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:08.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:08 np0005534696 systemd-logind[744]: New session 47 of user zuul.
Nov 25 04:40:08 np0005534696 systemd[1]: Started Session 47 of User zuul.
Nov 25 04:40:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:08 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:08.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:09 np0005534696 python3.9[122157]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:09 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf14001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094009 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:40:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:09 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf14001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:09 np0005534696 python3.9[122310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:10.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:10 np0005534696 python3.9[122434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063609.213824-64-174607650141221/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=366a48c0bc0104e6b502b94bc86d9db21512d98a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:10 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf14001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:10 np0005534696 python3.9[122586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:10.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:10 np0005534696 python3.9[122709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063610.2981873-64-249403062025453/.source.conf _original_basename=ceph.conf follow=False checksum=a12b603cb850b5616045745d010769596d2b9016 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:11 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c0026e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:11 np0005534696 systemd[1]: session-47.scope: Deactivated successfully.
Nov 25 04:40:11 np0005534696 systemd[1]: session-47.scope: Consumed 1.782s CPU time.
Nov 25 04:40:11 np0005534696 systemd-logind[744]: Session 47 logged out. Waiting for processes to exit.
Nov 25 04:40:11 np0005534696 systemd-logind[744]: Removed session 47.
Nov 25 04:40:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:11 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100034a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:12.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:12 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100034a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:12.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:13 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c003180 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:13 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c003180 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:40:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:14.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:40:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:14 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c003180 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:14.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:15 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf20001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:15 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c004280 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:16.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:16 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100041b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:16.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:17 np0005534696 systemd-logind[744]: New session 48 of user zuul.
Nov 25 04:40:17 np0005534696 systemd[1]: Started Session 48 of User zuul.
Nov 25 04:40:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:17 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100041b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:17 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf20001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:17 np0005534696 python3.9[122894]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:40:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:18.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:18 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c004280 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:18.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:18 np0005534696 python3.9[123051]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:19 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c004280 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:19 np0005534696 python3.9[123204]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:19 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100041b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:19 np0005534696 python3.9[123379]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:40:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:20.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:20 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf20008dc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:20 np0005534696 python3.9[123532]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 04:40:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:20.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:21 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c005380 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:21 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c005380 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:22 np0005534696 dbus-broker-launch[731]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 04:40:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:22.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:22 np0005534696 python3.9[123690]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:40:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:22 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100052b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:22.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:22 np0005534696 python3.9[123774]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:40:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:23 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf20008dc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:23 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c005380 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:24 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c005380 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:24.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:24 np0005534696 python3.9[123929]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:40:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:25 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100052b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:25 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf20009ad0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:25 np0005534696 python3[124085]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 04:40:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:26.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:26 np0005534696 python3.9[124238]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:26 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c005380 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:26 np0005534696 python3.9[124390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:26.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:27 np0005534696 python3.9[124468]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:27 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c005380 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:27 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100052b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:27 np0005534696 python3.9[124621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:27 np0005534696 python3.9[124700]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.mc1iw51s recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:28.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:28 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf100052b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:28 np0005534696 python3.9[124852]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:28.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:28 np0005534696 python3.9[124930]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:29 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:29 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:29 np0005534696 python3.9[125083]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:40:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:30.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:30 np0005534696 python3[125237]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 04:40:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:30 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf20009ad0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:30.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:30 np0005534696 python3.9[125389]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:31 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:31 np0005534696 python3.9[125515]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063630.541739-434-67324885194420/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:31 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:31 np0005534696 python3.9[125668]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:32.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:32 np0005534696 python3.9[125793]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063631.5921311-478-257174204895514/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:32 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:32.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:33 np0005534696 python3.9[125945]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:33 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:33 np0005534696 python3.9[126071]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063632.7103636-524-180821878963960/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:33 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:34 np0005534696 python3.9[126224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:34.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:34 np0005534696 python3.9[126349]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063633.6917894-569-279418571208527/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:34 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:34.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:35 np0005534696 python3.9[126501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:35 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:35 np0005534696 python3.9[126627]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063634.6478124-613-163425793656944/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:35 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:36.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:36 np0005534696 python3.9[126780]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:36 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:36 np0005534696 python3.9[126932]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:40:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:37 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:37 np0005534696 python3.9[127088]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:37 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:37 np0005534696 python3.9[127241]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:40:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:38.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:38 np0005534696 python3.9[127394]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:40:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:38 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:38.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094039 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:40:39 np0005534696 python3.9[127548]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:40:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:39 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:39 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:39 np0005534696 python3.9[127705]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:40.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:40 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:40 np0005534696 python3.9[127881]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:40:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:40.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:41 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:41 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:41 np0005534696 python3.9[128035]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:40:41 np0005534696 ovs-vsctl[128036]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 04:40:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:42.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:42 np0005534696 python3.9[128189]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:40:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:42 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:42.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:42 np0005534696 python3.9[128344]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:40:42 np0005534696 ovs-vsctl[128345]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 04:40:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:43 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c006e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:43 np0005534696 python3.9[128496]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:40:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:43 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:44 np0005534696 python3.9[128651]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:44.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:44 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:44 np0005534696 python3.9[128803]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:40:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:44.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:40:44 np0005534696 python3.9[128881]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:45 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:45 np0005534696 python3.9[129036]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:45 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:45 np0005534696 python3.9[129114]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:46.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:46 np0005534696 python3.9[129267]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:46 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:46.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:46 np0005534696 python3.9[129419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:47 np0005534696 python3.9[129498]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:47 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10005fc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:47 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:40:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:47 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34004360 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:47 np0005534696 python3.9[129650]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.748226) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647748251, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2471, "num_deletes": 251, "total_data_size": 6383072, "memory_usage": 6473800, "flush_reason": "Manual Compaction"}
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647753994, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2610477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10680, "largest_seqno": 13146, "table_properties": {"data_size": 2602695, "index_size": 4276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20252, "raw_average_key_size": 21, "raw_value_size": 2585438, "raw_average_value_size": 2698, "num_data_blocks": 187, "num_entries": 958, "num_filter_entries": 958, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063447, "oldest_key_time": 1764063447, "file_creation_time": 1764063647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 5794 microseconds, and 4149 cpu microseconds.
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.754019) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2610477 bytes OK
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.754030) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.754405) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.754416) EVENT_LOG_v1 {"time_micros": 1764063647754413, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.754434) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6371992, prev total WAL file size 6371992, number of live WAL files 2.
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755306) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2549KB)], [21(13MB)]
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647755341, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16471752, "oldest_snapshot_seqno": -1}
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4453 keys, 14453662 bytes, temperature: kUnknown
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647786005, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14453662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14419261, "index_size": 22195, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 111776, "raw_average_key_size": 25, "raw_value_size": 14333435, "raw_average_value_size": 3218, "num_data_blocks": 957, "num_entries": 4453, "num_filter_entries": 4453, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764063647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.786204) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14453662 bytes
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.787941) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 535.8 rd, 470.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 13.2 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(11.8) write-amplify(5.5) OK, records in: 4879, records dropped: 426 output_compression: NoCompression
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.787957) EVENT_LOG_v1 {"time_micros": 1764063647787949, "job": 10, "event": "compaction_finished", "compaction_time_micros": 30741, "compaction_time_cpu_micros": 21725, "output_level": 6, "num_output_files": 1, "total_output_size": 14453662, "num_input_records": 4879, "num_output_records": 4453, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647788449, "job": 10, "event": "table_file_deletion", "file_number": 23}
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063647790347, "job": 10, "event": "table_file_deletion", "file_number": 21}
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.755257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.790416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.790429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.790430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.790432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:47 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:47.790433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:48 np0005534696 python3.9[129729]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:48.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:48 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:48 np0005534696 python3.9[129881]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:40:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:48 np0005534696 systemd[1]: Reloading.
Nov 25 04:40:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:48.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:48 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:40:48 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:40:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:49 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:49 np0005534696 python3.9[130070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:49 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10008030 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:49 np0005534696 python3.9[130149]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:50.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:50 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:40:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:50 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:40:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:50 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:40:50 np0005534696 python3.9[130301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:50 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34004c80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:50 np0005534696 python3.9[130379]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:51 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.374828) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651375134, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 251, "total_data_size": 123080, "memory_usage": 129368, "flush_reason": "Manual Compaction"}
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651376001, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 81017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13151, "largest_seqno": 13436, "table_properties": {"data_size": 79107, "index_size": 138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4703, "raw_average_key_size": 17, "raw_value_size": 75385, "raw_average_value_size": 279, "num_data_blocks": 6, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063648, "oldest_key_time": 1764063648, "file_creation_time": 1764063651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 1191 microseconds, and 571 cpu microseconds.
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376019) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 81017 bytes OK
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376029) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376294) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376305) EVENT_LOG_v1 {"time_micros": 1764063651376302, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376314) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 120931, prev total WAL file size 120931, number of live WAL files 2.
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376684) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(79KB)], [24(13MB)]
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651376714, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 14534679, "oldest_snapshot_seqno": -1}
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4213 keys, 11708725 bytes, temperature: kUnknown
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651402843, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11708725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11677410, "index_size": 19685, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 107727, "raw_average_key_size": 25, "raw_value_size": 11597240, "raw_average_value_size": 2752, "num_data_blocks": 840, "num_entries": 4213, "num_filter_entries": 4213, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764063651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.402983) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11708725 bytes
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.404108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 555.3 rd, 447.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.8 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(323.9) write-amplify(144.5) OK, records in: 4723, records dropped: 510 output_compression: NoCompression
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.404122) EVENT_LOG_v1 {"time_micros": 1764063651404115, "job": 12, "event": "compaction_finished", "compaction_time_micros": 26176, "compaction_time_cpu_micros": 17649, "output_level": 6, "num_output_files": 1, "total_output_size": 11708725, "num_input_records": 4723, "num_output_records": 4213, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651404211, "job": 12, "event": "table_file_deletion", "file_number": 26}
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764063651405804, "job": 12, "event": "table_file_deletion", "file_number": 24}
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.376495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.405859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.405863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.405864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.405865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:51 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:40:51.405866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:40:51 np0005534696 python3.9[130532]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:40:51 np0005534696 systemd[1]: Reloading.
Nov 25 04:40:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:51 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:51 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:40:51 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:40:51 np0005534696 systemd[1]: Starting Create netns directory...
Nov 25 04:40:51 np0005534696 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 04:40:51 np0005534696 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 04:40:51 np0005534696 systemd[1]: Finished Create netns directory.
Nov 25 04:40:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:52.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:52 np0005534696 python3.9[130726]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:52 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:52 np0005534696 python3.9[130948]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:53 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34004c80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:53 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:40:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:40:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:40:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:40:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:40:53 np0005534696 python3.9[131080]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063652.63148-1367-86349295978107/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:53 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:54.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:54 np0005534696 python3.9[131233]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:40:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:54 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:54 np0005534696 python3.9[131385]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:40:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:54.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:55 np0005534696 python3.9[131508]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063654.315825-1441-212246510928833/.source.json _original_basename=.uzqv2okj follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34004c80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:55 np0005534696 python3.9[131661]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:40:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:40:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:40:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:56.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:40:56 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:40:56 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:40:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:56 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:57 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:57 np0005534696 python3.9[132115]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 04:40:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:57 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:58 np0005534696 python3.9[132268]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 04:40:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:40:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:58 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34005fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:58 np0005534696 python3.9[132420]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 04:40:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:40:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000007s ======
Nov 25 04:40:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:40:58.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Nov 25 04:40:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094059 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:40:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:59 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:40:59 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:40:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:40:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:40:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:40:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:00 np0005534696 python3[132618]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 04:41:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:00 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:01 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34005fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:01 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:02.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:02 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:03 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:03 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34005fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:04.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:04 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:05 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:05 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:05 np0005534696 podman[132629]: 2025-11-25 09:41:05.814803123 +0000 UTC m=+5.539615765 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 04:41:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:05 np0005534696 podman[132731]: 2025-11-25 09:41:05.909896786 +0000 UTC m=+0.030904642 container create dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:41:05 np0005534696 podman[132731]: 2025-11-25 09:41:05.895173509 +0000 UTC m=+0.016181366 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 04:41:05 np0005534696 python3[132618]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e
Nov 25 04:41:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:06.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:06 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf34005fb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:06 np0005534696 python3.9[132910]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:41:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf2000a7e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:07 np0005534696 python3.9[133065]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:07 np0005534696 python3.9[133141]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:41:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:08.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:08 np0005534696 python3.9[133293]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063667.8584943-1705-35114068212160/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:08 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:08 np0005534696 python3.9[133369]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:41:08 np0005534696 systemd[1]: Reloading.
Nov 25 04:41:08 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:41:08 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:41:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:09 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:09 np0005534696 python3.9[133482]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:41:09 np0005534696 systemd[1]: Reloading.
Nov 25 04:41:09 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:41:09 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:41:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:09 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:09 np0005534696 systemd[1]: Starting ovn_controller container...
Nov 25 04:41:09 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:41:09 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75476735a919d2a06ecbeb75824826ef6a15eef74c033d56c4e4b4687739b02e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 04:41:09 np0005534696 systemd[1]: Started /usr/bin/podman healthcheck run dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8.
Nov 25 04:41:09 np0005534696 podman[133523]: 2025-11-25 09:41:09.733663141 +0000 UTC m=+0.082587994 container init dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 04:41:09 np0005534696 ovn_controller[133535]: + sudo -E kolla_set_configs
Nov 25 04:41:09 np0005534696 podman[133523]: 2025-11-25 09:41:09.753578841 +0000 UTC m=+0.102503674 container start dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:41:09 np0005534696 edpm-start-podman-container[133523]: ovn_controller
Nov 25 04:41:09 np0005534696 systemd[1]: Created slice User Slice of UID 0.
Nov 25 04:41:09 np0005534696 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 04:41:09 np0005534696 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 04:41:09 np0005534696 systemd[1]: Starting User Manager for UID 0...
Nov 25 04:41:09 np0005534696 edpm-start-podman-container[133522]: Creating additional drop-in dependency for "ovn_controller" (dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8)
Nov 25 04:41:09 np0005534696 podman[133543]: 2025-11-25 09:41:09.824248216 +0000 UTC m=+0.063277332 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:41:09 np0005534696 systemd[1]: Reloading.
Nov 25 04:41:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:09 np0005534696 systemd[133561]: Queued start job for default target Main User Target.
Nov 25 04:41:09 np0005534696 systemd[133561]: Created slice User Application Slice.
Nov 25 04:41:09 np0005534696 systemd[133561]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 04:41:09 np0005534696 systemd[133561]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 04:41:09 np0005534696 systemd[133561]: Reached target Paths.
Nov 25 04:41:09 np0005534696 systemd[133561]: Reached target Timers.
Nov 25 04:41:09 np0005534696 systemd[133561]: Starting D-Bus User Message Bus Socket...
Nov 25 04:41:09 np0005534696 systemd[133561]: Starting Create User's Volatile Files and Directories...
Nov 25 04:41:09 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:41:09 np0005534696 systemd[133561]: Listening on D-Bus User Message Bus Socket.
Nov 25 04:41:09 np0005534696 systemd[133561]: Reached target Sockets.
Nov 25 04:41:09 np0005534696 systemd[133561]: Finished Create User's Volatile Files and Directories.
Nov 25 04:41:09 np0005534696 systemd[133561]: Reached target Basic System.
Nov 25 04:41:09 np0005534696 systemd[133561]: Reached target Main User Target.
Nov 25 04:41:09 np0005534696 systemd[133561]: Startup finished in 110ms.
Nov 25 04:41:09 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:41:10 np0005534696 systemd[1]: Started User Manager for UID 0.
Nov 25 04:41:10 np0005534696 systemd[1]: Started ovn_controller container.
Nov 25 04:41:10 np0005534696 systemd[1]: dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8-6656789cb5accc86.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 04:41:10 np0005534696 systemd[1]: dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8-6656789cb5accc86.service: Failed with result 'exit-code'.
Nov 25 04:41:10 np0005534696 systemd[1]: Started Session c1 of User root.
Nov 25 04:41:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:10.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: INFO:__main__:Validating config file
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: INFO:__main__:Writing out command to execute
Nov 25 04:41:10 np0005534696 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: ++ cat /run_command
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: + ARGS=
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: + sudo kolla_copy_cacerts
Nov 25 04:41:10 np0005534696 systemd[1]: Started Session c2 of User root.
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: + [[ ! -n '' ]]
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: + . kolla_extend_start
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: + umask 0022
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 25 04:41:10 np0005534696 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.1744] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.1750] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.1759] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.1764] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.1768] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 04:41:10 np0005534696 kernel: br-int: entered promiscuous mode
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 04:41:10 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:10Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.1933] manager: (ovn-2c2076-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.1940] manager: (ovn-a23dd6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.1945] manager: (ovn-ad0cdb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 25 04:41:10 np0005534696 systemd-udevd[133666]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:41:10 np0005534696 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.2059] device (genev_sys_6081): carrier: link connected
Nov 25 04:41:10 np0005534696 NetworkManager[48892]: <info>  [1764063670.2062] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 25 04:41:10 np0005534696 systemd-udevd[133668]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:41:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:10 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10007020 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:10 np0005534696 python3.9[133798]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:41:10 np0005534696 ovs-vsctl[133801]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 04:41:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:11 np0005534696 python3.9[133953]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:41:11 np0005534696 ovs-vsctl[133956]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 04:41:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:11 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:11 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38002600 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:11 np0005534696 python3.9[134109]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:41:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:11 np0005534696 ovs-vsctl[134111]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 04:41:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:12.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:12 np0005534696 systemd[1]: session-48.scope: Deactivated successfully.
Nov 25 04:41:12 np0005534696 systemd[1]: session-48.scope: Consumed 40.398s CPU time.
Nov 25 04:41:12 np0005534696 systemd-logind[744]: Session 48 logged out. Waiting for processes to exit.
Nov 25 04:41:12 np0005534696 systemd-logind[744]: Removed session 48.
Nov 25 04:41:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:12 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400bf230 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:12.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:13 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:13 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:14.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:14 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38003140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:14.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:15 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400bfd70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:15 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:16 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000003s ======
Nov 25 04:41:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000003s
Nov 25 04:41:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:17 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38003140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:17 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400bfd70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:17 np0005534696 systemd-logind[744]: New session 50 of user zuul.
Nov 25 04:41:17 np0005534696 systemd[1]: Started Session 50 of User zuul.
Nov 25 04:41:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:18.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:18 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:18 np0005534696 python3.9[134295]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:41:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:18.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:19 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:19 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38003140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:19 np0005534696 python3.9[134452]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:20 np0005534696 python3.9[134630]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:20 np0005534696 systemd[1]: Stopping User Manager for UID 0...
Nov 25 04:41:20 np0005534696 systemd[133561]: Activating special unit Exit the Session...
Nov 25 04:41:20 np0005534696 systemd[133561]: Stopped target Main User Target.
Nov 25 04:41:20 np0005534696 systemd[133561]: Stopped target Basic System.
Nov 25 04:41:20 np0005534696 systemd[133561]: Stopped target Paths.
Nov 25 04:41:20 np0005534696 systemd[133561]: Stopped target Sockets.
Nov 25 04:41:20 np0005534696 systemd[133561]: Stopped target Timers.
Nov 25 04:41:20 np0005534696 systemd[133561]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 04:41:20 np0005534696 systemd[133561]: Closed D-Bus User Message Bus Socket.
Nov 25 04:41:20 np0005534696 systemd[133561]: Stopped Create User's Volatile Files and Directories.
Nov 25 04:41:20 np0005534696 systemd[133561]: Removed slice User Application Slice.
Nov 25 04:41:20 np0005534696 systemd[133561]: Reached target Shutdown.
Nov 25 04:41:20 np0005534696 systemd[133561]: Finished Exit the Session.
Nov 25 04:41:20 np0005534696 systemd[133561]: Reached target Exit the Session.
Nov 25 04:41:20 np0005534696 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 04:41:20 np0005534696 systemd[1]: Stopped User Manager for UID 0.
Nov 25 04:41:20 np0005534696 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 04:41:20 np0005534696 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 04:41:20 np0005534696 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 04:41:20 np0005534696 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 04:41:20 np0005534696 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 04:41:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:20 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400bfd70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:20 np0005534696 python3.9[134783]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:20.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:21 np0005534696 python3.9[134935]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:21 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:21 np0005534696 python3.9[135088]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:21 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:22 np0005534696 python3.9[135239]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:41:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:22.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:22 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380045b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:22 np0005534696 python3.9[135391]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 04:41:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:23 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c11e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:23 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:23 np0005534696 python3.9[135542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:24.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:24 np0005534696 python3.9[135664]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063683.3810637-220-38054517596351/.source follow=False _original_basename=haproxy.j2 checksum=deae64da24ad28f71dc47276f2e9f268f19a4519 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:24 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:24 np0005534696 python3.9[135814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:24.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094125 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:41:25 np0005534696 python3.9[135935]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063684.4718316-265-164521178707280/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:25 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:25 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c11e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:25 np0005534696 python3.9[136088]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:41:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:26.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:26 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:26 np0005534696 python3.9[136173]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:41:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:26.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:27 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380045b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:27 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:28.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:28 np0005534696 python3.9[136329]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:41:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:28 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c11e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:28.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:29 np0005534696 python3.9[136482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:29 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:29 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:29 np0005534696 python3.9[136605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063688.651481-376-128988139459853/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:30 np0005534696 python3.9[136756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:30.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:30 np0005534696 python3.9[136877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063689.7480862-376-280457534293485/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:30 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:30.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:31 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:31 np0005534696 python3.9[137028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:31 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf10006bf0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:31 np0005534696 python3.9[137149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063691.089122-508-191071578477354/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:32.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:32 np0005534696 python3.9[137300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:32 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:32 np0005534696 python3.9[137421]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063691.9127347-508-103495508065433/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:32 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:41:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:32.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:33 np0005534696 python3.9[137571]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:41:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:33 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:33 np0005534696 python3.9[137726]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:33 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:34 np0005534696 python3.9[137879]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:34 np0005534696 python3.9[137957]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:34 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1000c030 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:34 np0005534696 python3.9[138109]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:34.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:35 np0005534696 python3.9[138187]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:35 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1000c030 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:35 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:35 np0005534696 python3.9[138340]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:35 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:41:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:35 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:41:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:36.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:36 np0005534696 python3.9[138493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:36 np0005534696 python3.9[138571]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:36 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:36.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:36 np0005534696 python3.9[138723]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:37 np0005534696 python3.9[138802]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:37 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:37 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1000c030 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:37 np0005534696 python3.9[138954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:41:37 np0005534696 systemd[1]: Reloading.
Nov 25 04:41:37 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:41:37 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:41:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:38.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:38 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:38 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:41:38 np0005534696 python3.9[139143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:38.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:39 np0005534696 python3.9[139221]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:39 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1c007720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:39 np0005534696 python3.9[139374]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:39 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:39 np0005534696 python3.9[139478]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:40.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:40 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:40Z|00025|memory|INFO|16256 kB peak resident set size after 30.1 seconds
Nov 25 04:41:40 np0005534696 ovn_controller[133535]: 2025-11-25T09:41:40Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:2
Nov 25 04:41:40 np0005534696 podman[139602]: 2025-11-25 09:41:40.274761178 +0000 UTC m=+0.060482621 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:41:40 np0005534696 python3.9[139647]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:41:40 np0005534696 systemd[1]: Reloading.
Nov 25 04:41:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:40 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1000b000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:40 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:41:40 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:41:40 np0005534696 systemd[1]: Starting Create netns directory...
Nov 25 04:41:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:40 np0005534696 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 04:41:40 np0005534696 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 04:41:40 np0005534696 systemd[1]: Finished Create netns directory.
Nov 25 04:41:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000003s ======
Nov 25 04:41:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:40.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000003s
Nov 25 04:41:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:41 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1000b000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:41 np0005534696 python3.9[139846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:41 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf1000b000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:41 np0005534696 python3.9[140001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:42.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:42 np0005534696 python3.9[140124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063701.594311-961-216894181430834/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:42 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:42.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:43 np0005534696 python3.9[140276]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:41:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:43 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:43 np0005534696 python3.9[140429]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:41:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:43 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:43 np0005534696 python3.9[140553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063703.2360678-1036-190354537239563/.source.json _original_basename=.53i7pkuh follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:44.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:44 np0005534696 python3.9[140705]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:41:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:44 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:44.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094145 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:41:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:45 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500048e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:45 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:46.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:46 np0005534696 python3.9[141134]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 04:41:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:46 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:46 np0005534696 python3.9[141286]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 04:41:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:47 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:47 np0005534696 python3.9[141440]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 04:41:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:47 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:48.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:48 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58002630 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:48.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:49 np0005534696 python3[141612]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 04:41:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:49 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500053e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:49 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:50.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:50 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:50.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:51 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58003000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:51 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500053e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:52.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:52 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf380052c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:52.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:53 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:53 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58003000 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:54.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:54 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500060f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:54.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38005be0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:55 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:41:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:56.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094156 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:41:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:56 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58004100 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:56.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:57 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500060f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:57 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38005be0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:41:58.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:41:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:41:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:41:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:41:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:58 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:41:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:41:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:41:58.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:41:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:59 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:41:59 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500060f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:41:59 np0005534696 podman[141624]: 2025-11-25 09:41:59.798392388 +0000 UTC m=+10.604688517 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:41:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:41:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:41:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:41:59 np0005534696 podman[141841]: 2025-11-25 09:41:59.91089884 +0000 UTC m=+0.032003997 container create f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 04:41:59 np0005534696 podman[141841]: 2025-11-25 09:41:59.895725628 +0000 UTC m=+0.016830785 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:41:59 np0005534696 python3[141612]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:42:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:00.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:00 np0005534696 python3.9[142021]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:42:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:42:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:42:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:42:00 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:42:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:00 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38005be0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:00.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:01 np0005534696 python3.9[142175]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:01 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58004400 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:01 np0005534696 python3.9[142252]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:42:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:01 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:02 np0005534696 python3.9[142404]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063721.6069674-1300-153900324645681/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000004s ======
Nov 25 04:42:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:02.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000004s
Nov 25 04:42:02 np0005534696 python3.9[142480]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:42:02 np0005534696 systemd[1]: Reloading.
Nov 25 04:42:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:02 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500071f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:02 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:42:02 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:42:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000004s ======
Nov 25 04:42:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:02.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000004s
Nov 25 04:42:03 np0005534696 python3.9[142591]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:42:03 np0005534696 systemd[1]: Reloading.
Nov 25 04:42:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:03 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38005be0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:03 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:42:03 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:42:03 np0005534696 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 04:42:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:03 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58004d20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:03 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:42:03 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3b0790478fbc4a38ea95a97b40599d75b48fe98946614f043fa105a777d141/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 04:42:03 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3b0790478fbc4a38ea95a97b40599d75b48fe98946614f043fa105a777d141/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:42:03 np0005534696 systemd[1]: Started /usr/bin/podman healthcheck run f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1.
Nov 25 04:42:03 np0005534696 podman[142658]: 2025-11-25 09:42:03.684581097 +0000 UTC m=+0.104197837 container init f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + sudo -E kolla_set_configs
Nov 25 04:42:03 np0005534696 podman[142658]: 2025-11-25 09:42:03.706433205 +0000 UTC m=+0.126049945 container start f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 04:42:03 np0005534696 edpm-start-podman-container[142658]: ovn_metadata_agent
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Validating config file
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Copying service configuration files
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Writing out command to execute
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: ++ cat /run_command
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + CMD=neutron-ovn-metadata-agent
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + ARGS=
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + sudo kolla_copy_cacerts
Nov 25 04:42:03 np0005534696 podman[142678]: 2025-11-25 09:42:03.768235454 +0000 UTC m=+0.054028068 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:42:03 np0005534696 edpm-start-podman-container[142657]: Creating additional drop-in dependency for "ovn_metadata_agent" (f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1)
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + [[ ! -n '' ]]
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + . kolla_extend_start
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + umask 0022
Nov 25 04:42:03 np0005534696 ovn_metadata_agent[142671]: + exec neutron-ovn-metadata-agent
Nov 25 04:42:03 np0005534696 systemd[1]: Reloading.
Nov 25 04:42:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:03 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:42:03 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:42:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:42:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:42:04 np0005534696 systemd[1]: Started ovn_metadata_agent container.
Nov 25 04:42:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:04.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:04 np0005534696 systemd[1]: session-50.scope: Deactivated successfully.
Nov 25 04:42:04 np0005534696 systemd[1]: session-50.scope: Consumed 40.892s CPU time.
Nov 25 04:42:04 np0005534696 systemd-logind[744]: Session 50 logged out. Waiting for processes to exit.
Nov 25 04:42:04 np0005534696 systemd-logind[744]: Removed session 50.
Nov 25 04:42:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:04 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:04.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.276 142676 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.277 142676 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.277 142676 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.278 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.278 142676 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.278 142676 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.278 142676 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.278 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.278 142676 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.279 142676 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.279 142676 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.279 142676 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.279 142676 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.279 142676 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.279 142676 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.279 142676 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.280 142676 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.280 142676 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.280 142676 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.280 142676 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.280 142676 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.281 142676 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.281 142676 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.281 142676 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.281 142676 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.281 142676 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.281 142676 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.282 142676 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.282 142676 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.282 142676 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.282 142676 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.282 142676 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.282 142676 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.282 142676 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.283 142676 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.283 142676 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.283 142676 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.283 142676 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.283 142676 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.283 142676 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.284 142676 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.284 142676 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.284 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.284 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.284 142676 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.284 142676 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.284 142676 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.284 142676 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.285 142676 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.285 142676 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.285 142676 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.285 142676 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.285 142676 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.285 142676 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.285 142676 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.286 142676 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.286 142676 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.286 142676 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.286 142676 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.286 142676 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.286 142676 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.287 142676 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.287 142676 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.287 142676 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.287 142676 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.287 142676 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.287 142676 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.287 142676 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.288 142676 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.288 142676 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.288 142676 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.288 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.288 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.288 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.288 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.289 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.289 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.289 142676 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.289 142676 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.289 142676 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.289 142676 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.289 142676 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.290 142676 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.290 142676 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.290 142676 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.290 142676 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.290 142676 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.290 142676 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.290 142676 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.291 142676 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.291 142676 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.291 142676 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.291 142676 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.291 142676 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.291 142676 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.291 142676 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.292 142676 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.292 142676 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.292 142676 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.292 142676 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.292 142676 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.292 142676 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.292 142676 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.292 142676 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.293 142676 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.293 142676 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.293 142676 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.293 142676 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.293 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.293 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.294 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.294 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.294 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.294 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.294 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.294 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.294 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.295 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.295 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.295 142676 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.295 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.295 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.295 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.295 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.296 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.296 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.296 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.296 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.296 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.296 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.297 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.297 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.297 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.297 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.297 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.297 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.297 142676 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.298 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.298 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.298 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.298 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.298 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.298 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.298 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.299 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.299 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.299 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.299 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.299 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.299 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.299 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.300 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.300 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.300 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.300 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.300 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.300 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.300 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.300 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.301 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.301 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.301 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.301 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.301 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.301 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.301 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.302 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.302 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.302 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.302 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.302 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.302 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.302 142676 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.303 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.303 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.303 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.303 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.303 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.303 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.303 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.304 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.304 142676 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.304 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.304 142676 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.304 142676 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.304 142676 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.304 142676 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.305 142676 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.305 142676 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.305 142676 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.305 142676 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.305 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.305 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.305 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.306 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.306 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.306 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.306 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.306 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.306 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.306 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.307 142676 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.307 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.307 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.307 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.307 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.307 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.307 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.307 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.308 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.308 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.308 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.308 142676 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.308 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.308 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.308 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.309 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.309 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.309 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.309 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.309 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.309 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.309 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.309 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.310 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.310 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.310 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.310 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.310 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.310 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.310 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.311 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.311 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.311 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.311 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.311 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.311 142676 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.311 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.312 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.312 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.312 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.312 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.312 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.312 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.313 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.313 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.313 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.313 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.313 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.313 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.313 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.314 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.314 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.314 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.314 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.314 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.314 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.314 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.315 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.315 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.315 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.315 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.315 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.315 142676 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.315 142676 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.316 142676 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.316 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.316 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.316 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.316 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.316 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.316 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.317 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.317 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.317 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.317 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.317 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.317 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.317 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.317 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.318 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.318 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.318 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.318 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.318 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.318 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.318 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.319 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.319 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.319 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.319 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.319 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.319 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.320 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.320 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.320 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.320 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.320 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.320 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.320 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.320 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.321 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.321 142676 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.321 142676 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.328 142676 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.328 142676 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.328 142676 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.328 142676 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.329 142676 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Nov 25 04:42:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:05 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500071f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.339 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name f116e443-3007-4d69-b0d6-1b58bbc026ea (UUID: f116e443-3007-4d69-b0d6-1b58bbc026ea) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.356 142676 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.356 142676 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.356 142676 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.356 142676 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.358 142676 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.363 142676 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.367 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'f116e443-3007-4d69-b0d6-1b58bbc026ea'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], external_ids={}, name=f116e443-3007-4d69-b0d6-1b58bbc026ea, nb_cfg_timestamp=1764063678188, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.368 142676 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7facf8b10a00>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.369 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.369 142676 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.369 142676 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.369 142676 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.373 142676 DEBUG oslo_service.service [-] Started child 142781 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.376 142781 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-430728'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.376 142676 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp0soymjuj/privsep.sock']#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.392 142781 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.393 142781 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.393 142781 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.395 142781 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.399 142781 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.403 142781 INFO eventlet.wsgi.server [-] (142781) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Nov 25 04:42:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:05 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38005be0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:05 np0005534696 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 04:42:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.931 142676 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.932 142676 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0soymjuj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.840 142787 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.843 142787 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.845 142787 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.845 142787 INFO oslo.privsep.daemon [-] privsep daemon running as pid 142787#033[00m
Nov 25 04:42:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:05.934 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[5d046232-6894-4bcd-a77f-ebf06377df9c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:42:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:06.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.328 142787 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.328 142787 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.328 142787 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:42:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:06 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58004d20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.761 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4d8aaa-7b4f-4638-8bc9-b247d69a14d4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.763 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, column=external_ids, values=({'neutron:ovn-metadata-id': 'c51241bd-45fd-51e0-b681-ef3499eea07a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.769 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.773 142676 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.774 142676 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.774 142676 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.774 142676 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.774 142676 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.774 142676 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.774 142676 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.774 142676 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.775 142676 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.775 142676 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.775 142676 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.775 142676 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.775 142676 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.775 142676 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.775 142676 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.776 142676 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.776 142676 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.776 142676 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.776 142676 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.776 142676 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.776 142676 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.776 142676 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.777 142676 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.777 142676 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.777 142676 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.777 142676 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.777 142676 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.777 142676 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.778 142676 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.778 142676 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.778 142676 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.778 142676 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.778 142676 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.778 142676 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.779 142676 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.779 142676 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.779 142676 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.779 142676 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.779 142676 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.779 142676 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.780 142676 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.780 142676 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.780 142676 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.780 142676 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.780 142676 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.780 142676 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.780 142676 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.780 142676 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.781 142676 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.781 142676 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.781 142676 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.781 142676 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.781 142676 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.781 142676 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.781 142676 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.782 142676 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.782 142676 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.782 142676 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.782 142676 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.782 142676 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.782 142676 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.782 142676 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.783 142676 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.783 142676 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.783 142676 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.783 142676 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.783 142676 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.783 142676 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.783 142676 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.783 142676 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.784 142676 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.784 142676 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.784 142676 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.784 142676 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.784 142676 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.784 142676 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.784 142676 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.785 142676 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.785 142676 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.785 142676 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.785 142676 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.785 142676 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.785 142676 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.785 142676 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.786 142676 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.786 142676 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.786 142676 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.786 142676 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.786 142676 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.786 142676 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.786 142676 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.787 142676 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.787 142676 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.787 142676 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.787 142676 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.787 142676 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.787 142676 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.787 142676 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.787 142676 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.788 142676 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.788 142676 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.788 142676 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.788 142676 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.788 142676 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.788 142676 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.788 142676 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.788 142676 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.789 142676 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.789 142676 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.789 142676 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.789 142676 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.789 142676 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.789 142676 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.790 142676 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.790 142676 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.790 142676 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.790 142676 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.790 142676 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.790 142676 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.790 142676 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.791 142676 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.791 142676 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.791 142676 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.791 142676 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.791 142676 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.791 142676 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.792 142676 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.792 142676 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.792 142676 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.792 142676 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.792 142676 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.792 142676 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.792 142676 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.792 142676 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.793 142676 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.793 142676 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.793 142676 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.793 142676 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.793 142676 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.793 142676 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.794 142676 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.794 142676 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.794 142676 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.794 142676 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.794 142676 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.794 142676 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.794 142676 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.795 142676 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.795 142676 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.795 142676 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.795 142676 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.795 142676 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.795 142676 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.795 142676 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.795 142676 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.796 142676 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.796 142676 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.796 142676 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.796 142676 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.796 142676 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.796 142676 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.796 142676 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.797 142676 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.797 142676 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.797 142676 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.797 142676 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.797 142676 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.797 142676 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.797 142676 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.797 142676 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.798 142676 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.798 142676 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.798 142676 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.798 142676 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.798 142676 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.798 142676 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.798 142676 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.799 142676 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.799 142676 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.799 142676 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.799 142676 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.799 142676 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.799 142676 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.799 142676 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.800 142676 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.800 142676 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.800 142676 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.800 142676 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.800 142676 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.800 142676 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.800 142676 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.800 142676 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.801 142676 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.801 142676 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.801 142676 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.801 142676 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.801 142676 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.801 142676 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.801 142676 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.802 142676 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.802 142676 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.802 142676 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.802 142676 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.802 142676 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.802 142676 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.802 142676 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.802 142676 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.803 142676 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.803 142676 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.803 142676 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.803 142676 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.803 142676 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.803 142676 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.803 142676 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.804 142676 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.804 142676 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.804 142676 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.804 142676 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.804 142676 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.804 142676 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.804 142676 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.804 142676 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.805 142676 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.805 142676 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.805 142676 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.805 142676 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.805 142676 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.805 142676 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.805 142676 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.805 142676 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.806 142676 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.806 142676 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.806 142676 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.806 142676 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.806 142676 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.806 142676 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.806 142676 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.807 142676 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.807 142676 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.807 142676 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.807 142676 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.807 142676 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.807 142676 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.807 142676 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.808 142676 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.808 142676 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.808 142676 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.808 142676 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.808 142676 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.808 142676 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.809 142676 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.809 142676 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.809 142676 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.809 142676 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.809 142676 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.809 142676 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.809 142676 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.809 142676 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.810 142676 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.811 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.812 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.813 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.814 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.814 142676 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.814 142676 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.814 142676 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.814 142676 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.814 142676 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:42:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:42:06.814 142676 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 04:42:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500071f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:07 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500071f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:42:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:08.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:42:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:08 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:08.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:09 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58004d20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:09 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500071f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:10 np0005534696 systemd-logind[744]: New session 51 of user zuul.
Nov 25 04:42:10 np0005534696 systemd[1]: Started Session 51 of User zuul.
Nov 25 04:42:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:10.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:10 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38005be0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:10 np0005534696 podman[142923]: 2025-11-25 09:42:10.681477102 +0000 UTC m=+0.069590658 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:42:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:10 np0005534696 python3.9[142959]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:42:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:10.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:11 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:11 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:11 np0005534696 python3.9[143129]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:12 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:42:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:12.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:12 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:12 np0005534696 python3.9[143292]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:42:12 np0005534696 systemd[1]: Reloading.
Nov 25 04:42:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:12.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:13 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:42:13 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:42:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:13 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:13 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:13 np0005534696 python3.9[143477]: ansible-ansible.builtin.service_facts Invoked
Nov 25 04:42:13 np0005534696 network[143495]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:42:13 np0005534696 network[143496]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:42:13 np0005534696 network[143497]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:42:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:14.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:14 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:14.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:15 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:42:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:15 : epoch 6925796b : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:42:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:15 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:15 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf400c22e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:16.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:16 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf38005be0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:16.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:17 np0005534696 python3.9[143762]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:42:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:17 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf500071f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[120174]: 25/11/2025 09:42:17 : epoch 6925796b : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7faf58004d20 fd 47 proxy ignored for local
Nov 25 04:42:17 np0005534696 kernel: ganesha.nfsd[141387]: segfault at 50 ip 00007fafd0b0d32e sp 00007faf9dffa210 error 4 in libntirpc.so.5.8[7fafd0af2000+2c000] likely on CPU 3 (core 0, socket 3)
Nov 25 04:42:17 np0005534696 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 04:42:17 np0005534696 systemd[1]: Started Process Core Dump (PID 143916/UID 0).
Nov 25 04:42:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:17 np0005534696 python3.9[143915]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:42:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:42:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:18.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:42:18 np0005534696 python3.9[144071]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:42:18 np0005534696 systemd-coredump[143917]: Process 120178 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 63:#012#0  0x00007fafd0b0d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 25 04:42:18 np0005534696 systemd[1]: systemd-coredump@3-143916-0.service: Deactivated successfully.
Nov 25 04:42:18 np0005534696 systemd[1]: systemd-coredump@3-143916-0.service: Consumed 1.047s CPU time.
Nov 25 04:42:18 np0005534696 podman[144229]: 2025-11-25 09:42:18.822052012 +0000 UTC m=+0.019654202 container died dcb3c585579a96cca5525c4a3ea45c8cb312183165c797862c3dac105714cbb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 25 04:42:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:18.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:19 np0005534696 systemd[1]: var-lib-containers-storage-overlay-590403ef31678cae08a1e6a193d0c986c749d97cdea235ce0c37004648525303-merged.mount: Deactivated successfully.
Nov 25 04:42:19 np0005534696 podman[144229]: 2025-11-25 09:42:19.189887047 +0000 UTC m=+0.387489227 container remove dcb3c585579a96cca5525c4a3ea45c8cb312183165c797862c3dac105714cbb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:42:19 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Main process exited, code=exited, status=139/n/a
Nov 25 04:42:19 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Failed with result 'exit-code'.
Nov 25 04:42:19 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.133s CPU time.
Nov 25 04:42:19 np0005534696 python3.9[144228]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:42:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:19 np0005534696 python3.9[144417]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:42:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:20.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:20 np0005534696 python3.9[144595]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:42:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:20.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:21 np0005534696 python3.9[144748]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:42:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:21 np0005534696 python3.9[144903]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:22.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:22 np0005534696 python3.9[145055]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:22 np0005534696 python3.9[145207]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:22.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:23 np0005534696 python3.9[145360]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:23 np0005534696 python3.9[145512]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094223 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:42:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:24 np0005534696 python3.9[145665]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094224 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:42:24 np0005534696 python3.9[145817]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:24.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:25 np0005534696 python3.9[145969]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:25 np0005534696 python3.9[146122]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:25 np0005534696 python3.9[146275]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:26.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:26 np0005534696 python3.9[146427]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:26 np0005534696 python3.9[146579]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:26.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:27 np0005534696 python3.9[146732]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:27 np0005534696 python3.9[146884]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:42:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:28 np0005534696 python3.9[147037]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:28.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:29 np0005534696 python3.9[147190]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 04:42:29 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Scheduled restart job, restart counter is at 4.
Nov 25 04:42:29 np0005534696 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:42:29 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.133s CPU time.
Nov 25 04:42:29 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:42:29 np0005534696 podman[147328]: 2025-11-25 09:42:29.692539823 +0000 UTC m=+0.034923086 container create e9b26ee0cdfd1574982440200acae1b90f9cb988aa79eb6267e87f95f3cd119b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:42:29 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9175878127aec9dace65bda13ce469f90169e555e1d4de5a83318ab63b9f968e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:42:29 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9175878127aec9dace65bda13ce469f90169e555e1d4de5a83318ab63b9f968e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:42:29 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9175878127aec9dace65bda13ce469f90169e555e1d4de5a83318ab63b9f968e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:42:29 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9175878127aec9dace65bda13ce469f90169e555e1d4de5a83318ab63b9f968e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:42:29 np0005534696 podman[147328]: 2025-11-25 09:42:29.743254407 +0000 UTC m=+0.085637680 container init e9b26ee0cdfd1574982440200acae1b90f9cb988aa79eb6267e87f95f3cd119b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:42:29 np0005534696 podman[147328]: 2025-11-25 09:42:29.749533732 +0000 UTC m=+0.091917005 container start e9b26ee0cdfd1574982440200acae1b90f9cb988aa79eb6267e87f95f3cd119b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:42:29 np0005534696 bash[147328]: e9b26ee0cdfd1574982440200acae1b90f9cb988aa79eb6267e87f95f3cd119b
Nov 25 04:42:29 np0005534696 podman[147328]: 2025-11-25 09:42:29.678145014 +0000 UTC m=+0.020528297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:42:29 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:30 np0005534696 python3.9[147398]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:42:30 np0005534696 systemd[1]: Reloading.
Nov 25 04:42:30 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:42:30 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:42:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:42:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:30.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:42:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:30 np0005534696 python3.9[147621]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:42:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:30.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:42:31 np0005534696 python3.9[147775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:31 np0005534696 python3.9[147928]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:32.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:32 np0005534696 python3.9[148082]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:32 np0005534696 python3.9[148235]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:42:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:32.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:42:33 np0005534696 python3.9[148389]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:33 np0005534696 podman[148543]: 2025-11-25 09:42:33.900197519 +0000 UTC m=+0.056350326 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 04:42:33 np0005534696 python3.9[148544]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:42:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.002000019s ======
Nov 25 04:42:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:34.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000019s
Nov 25 04:42:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:34.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:35 np0005534696 python3.9[148713]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 04:42:35 np0005534696 python3.9[148867]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 04:42:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:42:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:42:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:42:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:36.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:42:36 np0005534696 python3.9[149026]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 04:42:36 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:42:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:36.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:37 np0005534696 python3.9[149188]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:42:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:38 np0005534696 python3.9[149273]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:42:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.002000020s ======
Nov 25 04:42:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:38.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Nov 25 04:42:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:38.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:40.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:41 np0005534696 podman[149313]: 2025-11-25 09:42:41.364514873 +0000 UTC m=+0.072237889 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 04:42:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:42:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:42.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:42 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:42.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:42:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:44.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:42:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:44 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:45.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094245 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:42:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:46.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:46 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:47.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:42:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:48.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:42:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:48 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:42:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:49.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:42:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:50.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:50 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:42:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2552 writes, 14K keys, 2552 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s#012Cumulative WAL: 2552 writes, 2552 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2552 writes, 14K keys, 2552 commit groups, 1.0 writes per commit group, ingest: 38.61 MB, 0.06 MB/s#012Interval WAL: 2552 writes, 2552 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    459.6      0.05              0.03         6    0.008       0      0       0.0       0.0#012  L6      1/0   11.17 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9    512.5    441.0      0.14              0.09         5    0.027     19K   2263       0.0       0.0#012 Sum      1/0   11.17 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    384.0    445.7      0.18              0.12        11    0.016     19K   2263       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    385.7    447.7      0.18              0.12        10    0.018     19K   2263       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    512.5    441.0      0.14              0.09         5    0.027     19K   2263       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    467.9      0.04              0.03         5    0.009       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.020, interval 0.020#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56183fd2f350#2 capacity: 304.00 MB usage: 2.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(157,2.56 MB,0.841166%) FilterBlock(11,63.92 KB,0.0205341%) IndexBlock(11,131.53 KB,0.0422528%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 04:42:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:51.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44004160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:42:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:52.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:42:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44004160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44004160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:42:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:42:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:54 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:55.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40003b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:42:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:42:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:56.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:42:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:56 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40003b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:42:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:57.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:42:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:42:58.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:58 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:42:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:42:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:42:59.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:42:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40003b90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:42:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:42:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:42:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:42:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:42:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:00.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:00 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:02.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:02 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:04 np0005534696 podman[149491]: 2025-11-25 09:43:04.03019783 +0000 UTC m=+0.040324244 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 04:43:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:04.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 25 04:43:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:43:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:43:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:43:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:43:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:43:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:43:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:04 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:43:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:05.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:43:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:43:05.331 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:43:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:43:05.331 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:43:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:43:05.331 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:43:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:43:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:43:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:06 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:07.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd40004c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:08.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:08 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:43:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:43:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:09.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:10.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:10 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440055f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:12.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:12 np0005534696 podman[149664]: 2025-11-25 09:43:12.350253299 +0000 UTC m=+0.059090182 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:43:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:12 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd50001220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:13.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44007790 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44007790 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:14.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:43:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:15.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:43:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd50001d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:43:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5911 writes, 25K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5911 writes, 1013 syncs, 5.84 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5911 writes, 25K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 19.14 MB, 0.03 MB/s#012Interval WAL: 5911 writes, 1013 syncs, 5.84 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 25 04:43:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c001df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:16 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44007790 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:43:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:43:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44007790 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd50001d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:18 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:19.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44007790 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44007790 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:20 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd50001d40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44007790 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:43:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:22.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:43:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:22 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44007790 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:23.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500031d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:24 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:24 np0005534696 kernel: SELinux:  Converting 2774 SID table entries...
Nov 25 04:43:24 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:43:24 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:43:24 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:43:24 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:43:24 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:43:24 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:43:24 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:43:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:25.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500031d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:26.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:26 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c009e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:43:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:27.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:43:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:28.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:28 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500031d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:29.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c009e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:30.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:30 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:31.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c009e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:31 np0005534696 kernel: SELinux:  Converting 2774 SID table entries...
Nov 25 04:43:31 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:43:31 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:43:31 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:43:31 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:43:31 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:43:31 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:43:31 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:43:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:32 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:34 np0005534696 dbus-broker-launch[731]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 04:43:34 np0005534696 podman[149861]: 2025-11-25 09:43:34.327104162 +0000 UTC m=+0.036784900 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 04:43:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:34 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c009e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:35.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:36 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:37.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c009e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:38 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x55ec448d3210 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:39.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:40 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c009e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:41.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c009e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c009e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:42 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:43 np0005534696 podman[152228]: 2025-11-25 09:43:43.344542036 +0000 UTC m=+0.061802705 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:43:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd5c0039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00bcb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:44.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:44 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00bcb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd5c0044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:46.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:46 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00bcb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00bcb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:48.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:48 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd5c0044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00c9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00c9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:50 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00c9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd5c0044e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00c9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00c9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd5c005970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:54.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:54 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00c9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00c9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:43:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:56.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:56 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00c9c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:57.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:43:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:43:58.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:43:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:58 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:43:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:43:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:43:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:43:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:43:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:43:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:43:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:43:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:43:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:00.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:00 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:01.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:02.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:02 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:04.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:04 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:05 np0005534696 podman[166780]: 2025-11-25 09:44:05.328254103 +0000 UTC m=+0.038644024 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 04:44:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:44:05.330 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:44:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:44:05.331 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:44:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:44:05.331 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:44:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:44:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:06.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:44:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:06 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:07.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd500042d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:07 np0005534696 kernel: SELinux:  Converting 2775 SID table entries...
Nov 25 04:44:07 np0005534696 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 04:44:07 np0005534696 kernel: SELinux:  policy capability open_perms=1
Nov 25 04:44:07 np0005534696 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 04:44:07 np0005534696 kernel: SELinux:  policy capability always_check_network=0
Nov 25 04:44:07 np0005534696 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 04:44:07 np0005534696 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 04:44:07 np0005534696 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 04:44:07 np0005534696 dbus-broker-launch[731]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 04:44:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:08.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:08 np0005534696 dbus-broker-launch[725]: Noticed file-system modification, trigger reload.
Nov 25 04:44:08 np0005534696 dbus-broker-launch[725]: Noticed file-system modification, trigger reload.
Nov 25 04:44:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:44:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:44:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:44:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:44:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:08 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:44:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:09.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:44:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd5c006290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:44:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:44:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:10 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:11.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:12.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:44:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:44:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:12 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:13.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:13 np0005534696 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 04:44:13 np0005534696 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 04:44:13 np0005534696 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 04:44:13 np0005534696 systemd[1]: sshd.service: Consumed 1.401s CPU time, read 32.0K from disk, written 0B to disk.
Nov 25 04:44:13 np0005534696 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 04:44:13 np0005534696 systemd[1]: Stopping sshd-keygen.target...
Nov 25 04:44:13 np0005534696 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 04:44:13 np0005534696 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 04:44:13 np0005534696 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 04:44:13 np0005534696 systemd[1]: Reached target sshd-keygen.target.
Nov 25 04:44:13 np0005534696 systemd[1]: Starting OpenSSH server daemon...
Nov 25 04:44:13 np0005534696 systemd[1]: Started OpenSSH server daemon.
Nov 25 04:44:13 np0005534696 podman[167817]: 2025-11-25 09:44:13.851084811 +0000 UTC m=+0.060253773 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:44:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:14.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:14 np0005534696 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 04:44:14 np0005534696 systemd[1]: Starting man-db-cache-update.service...
Nov 25 04:44:15 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:15 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:15 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:15 np0005534696 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 04:44:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:16.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:16 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:17.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64003140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:17 np0005534696 python3.9[172643]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:44:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:17 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:17 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:17 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:18.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:18 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:18 np0005534696 python3.9[174299]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:44:18 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:19 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:19 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:19 np0005534696 python3.9[175725]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:44:19 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:19 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:19 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:44:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:20.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:44:20 np0005534696 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 04:44:20 np0005534696 systemd[1]: Finished man-db-cache-update.service.
Nov 25 04:44:20 np0005534696 systemd[1]: man-db-cache-update.service: Consumed 6.830s CPU time.
Nov 25 04:44:20 np0005534696 systemd[1]: run-r6fcfdca3441c42e9b001bfb2956d429b.service: Deactivated successfully.
Nov 25 04:44:20 np0005534696 python3.9[177121]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:44:20 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:20 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:20 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:20 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64003a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:21 np0005534696 python3.9[177435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:21 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:21 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:21 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:22.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:22 np0005534696 python3.9[177626]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:22 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:22 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:22 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:22 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:23 np0005534696 python3.9[177817]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:23 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:23 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:23 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:24.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:24 np0005534696 python3.9[178007]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:24 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64003a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:25 np0005534696 python3.9[178162]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:25 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:25.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:25 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:25 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:26 np0005534696 python3.9[178353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 04:44:26 np0005534696 systemd[1]: Reloading.
Nov 25 04:44:26 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:44:26 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:44:26 np0005534696 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 04:44:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:26.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:26 np0005534696 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 04:44:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:26 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:26 np0005534696 python3.9[178546]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:27.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64003a60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:27 np0005534696 python3.9[178702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:28.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:28 np0005534696 python3.9[178858]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:28 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:28 np0005534696 python3.9[179013]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:29.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d6f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:29 np0005534696 python3.9[179169]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:30 np0005534696 python3.9[179325]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:30.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:30 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:30 np0005534696 python3.9[179480]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:31.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:31 np0005534696 python3.9[179636]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:32 np0005534696 python3.9[179792]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:32.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:32 np0005534696 python3.9[179947]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:32 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:33.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:33 np0005534696 python3.9[180102]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d750 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:33 np0005534696 python3.9[180258]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:34.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:34 np0005534696 python3.9[180414]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:34 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:44:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:35.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:44:35 np0005534696 python3.9[180569]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 04:44:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:35 np0005534696 podman[180698]: 2025-11-25 09:44:35.904936869 +0000 UTC m=+0.062994830 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:44:36 np0005534696 python3.9[180742]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:44:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:36.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:36 np0005534696 python3.9[180894]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:44:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:36 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:37 np0005534696 python3.9[181046]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:44:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:37.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:37 np0005534696 python3.9[181199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:44:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:38 np0005534696 python3.9[181352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:44:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:38.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:38 np0005534696 python3.9[181504]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:44:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:38 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:39 np0005534696 python3.9[181656]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:39.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:39 np0005534696 python3.9[181782]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063878.6748285-1625-221887539120104/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:40 np0005534696 python3.9[181935]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:40.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:40 np0005534696 python3.9[182085]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063879.7638946-1625-105270663370819/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:40 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:40 np0005534696 python3.9[182238]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:41.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:41 np0005534696 python3.9[182364]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063880.615659-1625-242233876573670/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680bf270 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:41 np0005534696 python3.9[182516]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:42 np0005534696 python3.9[182643]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063881.499562-1625-281412083966986/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:42.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:42 np0005534696 python3.9[182795]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:42 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:43.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:43 np0005534696 python3.9[182921]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063882.3905811-1625-68052892700018/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:43 np0005534696 python3.9[183073]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:44 np0005534696 podman[183171]: 2025-11-25 09:44:44.06855726 +0000 UTC m=+0.062781217 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 04:44:44 np0005534696 python3.9[183216]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063883.3709214-1625-178108622882446/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:44.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:44 np0005534696 python3.9[183375]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:44 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680bfdb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:45 np0005534696 python3.9[183498]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063884.3176825-1625-206081722696547/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:45.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:45 np0005534696 python3.9[183651]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d770 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:45 np0005534696 python3.9[183777]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764063885.230733-1625-100101147537183/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:46.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:46 np0005534696 python3.9[183929]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 04:44:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:46 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd700053a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:47 np0005534696 python3.9[184082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:47.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680bfdb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:47 np0005534696 python3.9[184235]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:47 np0005534696 python3.9[184388]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:48.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:48 np0005534696 python3.9[184540]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:48 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:48 np0005534696 python3.9[184692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:49.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd700053a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:49 np0005534696 python3.9[184845]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680bfdb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:49 np0005534696 python3.9[184998]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:50.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:50 np0005534696 python3.9[185150]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:50 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:50 np0005534696 python3.9[185302]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:51.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:51 np0005534696 python3.9[185455]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:51 np0005534696 python3.9[185607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd700060b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:52.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:52 np0005534696 python3.9[185760]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c1220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:52 np0005534696 python3.9[185912]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:53.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:53 np0005534696 python3.9[186065]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d950 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:53 np0005534696 python3.9[186217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:54 np0005534696 python3.9[186341]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063893.5152292-2287-135887136076897/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:54.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:54 np0005534696 python3.9[186493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:54 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd700060b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:55 np0005534696 python3.9[186616]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063894.3668168-2287-160900322413694/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:55.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c1220 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:55 np0005534696 python3.9[186769]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:44:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:56 np0005534696 python3.9[186893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063895.344977-2287-22375415421948/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:56.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:56 np0005534696 python3.9[187045]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:56 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:56 np0005534696 python3.9[187168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063896.223539-2287-86357556163037/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:57.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:57 np0005534696 python3.9[187321]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70006dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70006dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:57 np0005534696 python3.9[187444]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063897.0705874-2287-48793179724318/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:44:58.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:58 np0005534696 python3.9[187597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:58 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:59 np0005534696 python3.9[187720]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063897.9976377-2287-106742323215820/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:44:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:44:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:44:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:44:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:44:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:59 np0005534696 python3.9[187873]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:44:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:44:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:44:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:44:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:44:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:44:59 np0005534696 python3.9[187997]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063899.1559227-2287-215793338554874/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:00.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:00 np0005534696 python3.9[188149]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:00 np0005534696 python3.9[188297]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063899.9973347-2287-121935105849520/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:00 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:01.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:01 np0005534696 python3.9[188450]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:01 np0005534696 python3.9[188573]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063900.8801887-2287-217633472721200/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c1f30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:02 np0005534696 python3.9[188726]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:02.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:02 np0005534696 python3.9[188849]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063901.8496275-2287-8983800977761/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:02 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70006dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:03 np0005534696 python3.9[189001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:03.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:03 np0005534696 python3.9[189125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063902.7345254-2287-204959572387135/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:04 np0005534696 python3.9[189278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:04.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:04 np0005534696 python3.9[189401]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063903.6465776-2287-32284593397672/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:04 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:05 np0005534696 python3.9[189553]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:05.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:45:05.341 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:45:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:45:05.344 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:45:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:45:05.345 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:45:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70006dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:05 np0005534696 python3.9[189677]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063904.6760564-2287-18370174724972/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:05 np0005534696 python3.9[189830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:06 np0005534696 podman[189925]: 2025-11-25 09:45:06.272204766 +0000 UTC m=+0.048483293 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 04:45:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:06.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:06 np0005534696 python3.9[189969]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063905.5915148-2287-240688840022864/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:06 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:07.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:07 np0005534696 python3.9[190120]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:45:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70006dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:08 np0005534696 python3.9[190276]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 04:45:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:08.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:08 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:09.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64004d90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:09 np0005534696 dbus-broker-launch[731]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 04:45:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:09 np0005534696 python3.9[190433]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:10 np0005534696 python3.9[190586]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:10.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094510 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:45:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:10 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:10 np0005534696 python3.9[190738]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:11.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:11 np0005534696 python3.9[190891]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c00d990 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:11 np0005534696 python3.9[191043]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:12 np0005534696 python3.9[191262]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:12.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 04:45:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:45:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 04:45:12 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:45:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:12 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:12 np0005534696 python3.9[191428]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:13.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:13 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:45:13 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:45:13 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:45:13 np0005534696 python3.9[191582]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:13 np0005534696 python3.9[191735]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:14 np0005534696 podman[191859]: 2025-11-25 09:45:14.30332269 +0000 UTC m=+0.066848356 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:45:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:14 np0005534696 python3.9[191895]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:15 np0005534696 python3.9[192062]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:45:15 np0005534696 systemd[1]: Reloading.
Nov 25 04:45:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:15.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:15 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:45:15 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:45:15 np0005534696 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 04:45:15 np0005534696 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 04:45:15 np0005534696 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 04:45:15 np0005534696 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 04:45:15 np0005534696 systemd[1]: Starting libvirt logging daemon...
Nov 25 04:45:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:15 np0005534696 systemd[1]: Started libvirt logging daemon.
Nov 25 04:45:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:16 np0005534696 python3.9[192281]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:45:16 np0005534696 systemd[1]: Reloading.
Nov 25 04:45:16 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:45:16 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:45:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:45:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:16.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:45:16 np0005534696 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 04:45:16 np0005534696 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 04:45:16 np0005534696 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 04:45:16 np0005534696 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 04:45:16 np0005534696 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 04:45:16 np0005534696 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 04:45:16 np0005534696 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 04:45:16 np0005534696 systemd[1]: Started libvirt nodedev daemon.
Nov 25 04:45:16 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:45:16 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:45:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:16 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:16 np0005534696 python3.9[192498]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:45:17 np0005534696 systemd[1]: Reloading.
Nov 25 04:45:17 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:45:17 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:45:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:17.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:17 np0005534696 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 04:45:17 np0005534696 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 04:45:17 np0005534696 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 04:45:17 np0005534696 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 04:45:17 np0005534696 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 04:45:17 np0005534696 systemd[1]: Starting libvirt proxy daemon...
Nov 25 04:45:17 np0005534696 systemd[1]: Started libvirt proxy daemon.
Nov 25 04:45:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:17 np0005534696 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 04:45:17 np0005534696 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 04:45:17 np0005534696 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 04:45:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:17 np0005534696 python3.9[192717]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:45:17 np0005534696 systemd[1]: Reloading.
Nov 25 04:45:18 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:45:18 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:45:18 np0005534696 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 04:45:18 np0005534696 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 04:45:18 np0005534696 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 04:45:18 np0005534696 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 04:45:18 np0005534696 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 04:45:18 np0005534696 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 04:45:18 np0005534696 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 04:45:18 np0005534696 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 04:45:18 np0005534696 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 04:45:18 np0005534696 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 04:45:18 np0005534696 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 04:45:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:18.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:18 np0005534696 systemd[1]: Started libvirt QEMU daemon.
Nov 25 04:45:18 np0005534696 setroubleshoot[192535]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e7ff8867-1d53-47e5-a750-ebbfeb0e9954
Nov 25 04:45:18 np0005534696 setroubleshoot[192535]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 25 04:45:18 np0005534696 setroubleshoot[192535]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e7ff8867-1d53-47e5-a750-ebbfeb0e9954
Nov 25 04:45:18 np0005534696 setroubleshoot[192535]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 25 04:45:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:18 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:18 np0005534696 python3.9[192936]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:45:18 np0005534696 systemd[1]: Reloading.
Nov 25 04:45:19 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:45:19 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:45:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:19.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:19 np0005534696 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 04:45:19 np0005534696 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 04:45:19 np0005534696 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 04:45:19 np0005534696 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 04:45:19 np0005534696 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 04:45:19 np0005534696 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 04:45:19 np0005534696 systemd[1]: Starting libvirt secret daemon...
Nov 25 04:45:19 np0005534696 systemd[1]: Started libvirt secret daemon.
Nov 25 04:45:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005aa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:45:20 np0005534696 python3.9[193150]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:20.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:20 np0005534696 python3.9[193327]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 04:45:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:20 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:21.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:21 np0005534696 python3.9[193480]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:45:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:21 np0005534696 python3.9[193634]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 04:45:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:22.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:22 np0005534696 python3.9[193785]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:22 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:22 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:45:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:22 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:45:23 np0005534696 python3.9[193906]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063922.2456193-3361-266760156922418/.source.xml follow=False _original_basename=secret.xml.j2 checksum=ee7fcb172a9e9a6851069e0487499aace39188fe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:23.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:23 np0005534696 python3.9[194059]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine af1c9ae3-08d7-5547-a53d-2cccf7c6ef90#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:45:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:24.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:24 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:25.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:25 np0005534696 python3.9[194223]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:45:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:26.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:26 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:27 np0005534696 python3.9[194687]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:27.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:27 np0005534696 python3.9[194840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:28 np0005534696 python3.9[194964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063927.3491442-3527-59179216043810/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:28.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:28 np0005534696 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 04:45:28 np0005534696 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 04:45:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:28 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:28 np0005534696 python3.9[195116]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:29.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:29 np0005534696 python3.9[195269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:29 np0005534696 python3.9[195347]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:30 np0005534696 python3.9[195500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:30.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:30 np0005534696 python3.9[195578]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.okq_bcue recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:30 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:31 np0005534696 python3.9[195730]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:31 np0005534696 auditd[672]: Audit daemon rotating log files
Nov 25 04:45:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:31.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:31 np0005534696 python3.9[195809]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:31 np0005534696 python3.9[195962]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:45:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:32.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094532 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:45:32 np0005534696 python3[196115]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 04:45:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:32 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:33.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:33 np0005534696 python3.9[196268]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005ae0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:33 np0005534696 python3.9[196346]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:34 np0005534696 python3.9[196499]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:34.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:34 np0005534696 python3.9[196577]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:34 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:34 np0005534696 python3.9[196729]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:35.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:35 np0005534696 python3.9[196808]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005b00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:35 np0005534696 python3.9[196960]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:36 np0005534696 python3.9[197039]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:36.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:36 np0005534696 podman[197163]: 2025-11-25 09:45:36.683665647 +0000 UTC m=+0.041474981 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 04:45:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:36 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:36 np0005534696 python3.9[197208]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:37.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:37 np0005534696 python3.9[197334]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764063936.3792508-3902-13379729697675/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:37 np0005534696 python3.9[197486]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:38 np0005534696 python3.9[197639]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:45:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:38.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:38 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005b20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:39 np0005534696 python3.9[197794]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:39.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:39 np0005534696 python3.9[197947]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:45:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:40 np0005534696 python3.9[198101]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:45:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:40.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:40 np0005534696 python3.9[198280]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:45:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:40 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:41.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:41 np0005534696 python3.9[198436]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005b40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:42 np0005534696 python3.9[198589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:42.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:42 np0005534696 python3.9[198712]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063941.801314-4118-990653703388/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:42 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd70007ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:43 np0005534696 python3.9[198866]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:43.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:43 np0005534696 python3.9[198990]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063942.7227926-4163-125878007089446/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:44 np0005534696 python3.9[199143]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:45:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:44.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:44 np0005534696 python3.9[199266]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063943.7124996-4208-250689122975913/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:45:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:44 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:45 np0005534696 podman[199390]: 2025-11-25 09:45:45.085239593 +0000 UTC m=+0.062519715 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:45:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:45.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:45 np0005534696 python3.9[199435]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:45:45 np0005534696 systemd[1]: Reloading.
Nov 25 04:45:45 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:45:45 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:45:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:45 np0005534696 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 04:45:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:46 np0005534696 python3.9[199634]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 04:45:46 np0005534696 systemd[1]: Reloading.
Nov 25 04:45:46 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:45:46 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:45:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:46.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:46 np0005534696 systemd[1]: Reloading.
Nov 25 04:45:46 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:45:46 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:45:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:46 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005b80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:47 np0005534696 systemd[1]: session-51.scope: Deactivated successfully.
Nov 25 04:45:47 np0005534696 systemd[1]: session-51.scope: Consumed 2min 26.641s CPU time.
Nov 25 04:45:47 np0005534696 systemd-logind[744]: Session 51 logged out. Waiting for processes to exit.
Nov 25 04:45:47 np0005534696 systemd-logind[744]: Removed session 51.
Nov 25 04:45:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:47.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:48.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:48 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:49.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005d20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:50.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:50 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:51.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:52 np0005534696 systemd-logind[744]: New session 52 of user zuul.
Nov 25 04:45:52 np0005534696 systemd[1]: Started Session 52 of User zuul.
Nov 25 04:45:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005d40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:53.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:53 np0005534696 python3.9[199890]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:45:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:54.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:54 np0005534696 python3.9[200045]: ansible-ansible.builtin.service_facts Invoked
Nov 25 04:45:54 np0005534696 network[200062]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:45:54 np0005534696 network[200063]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:45:54 np0005534696 network[200064]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:45:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:54 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:55.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:45:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:56.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:56 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:57.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:58 np0005534696 python3.9[200340]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 04:45:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:45:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:45:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:45:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:58 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:59 np0005534696 python3.9[200424]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:45:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:45:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:45:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:45:59.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:45:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:45:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:45:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:45:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:45:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:45:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:00 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:01.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:02.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:02 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74002660 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:03.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:03 np0005534696 python3.9[200609]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:46:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:04.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:04 np0005534696 python3.9[200762]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:46:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:04 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:05 np0005534696 python3.9[200915]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:46:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:05.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:46:05.339 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:46:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:46:05.340 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:46:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:46:05.340 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:46:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:05 np0005534696 python3.9[201068]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:46:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:06 np0005534696 python3.9[201222]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:06.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:06 np0005534696 python3.9[201345]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063965.7607458-248-130329368325012/.source.iscsi _original_basename=.6bbpk9j2 follow=False checksum=1830396fd043a027ac967c8485d3ea6ee575d21e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:06 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:07 np0005534696 podman[201469]: 2025-11-25 09:46:07.030161424 +0000 UTC m=+0.040507146 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:46:07 np0005534696 python3.9[201513]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:07.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74003230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:07 np0005534696 python3.9[201666]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:08 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:08 np0005534696 python3.9[201819]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:46:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:08 np0005534696 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 04:46:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:09.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:09 np0005534696 python3.9[201976]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:46:09 np0005534696 systemd[1]: Reloading.
Nov 25 04:46:09 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:46:09 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:46:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:09 np0005534696 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 04:46:09 np0005534696 systemd[1]: Starting Open-iSCSI...
Nov 25 04:46:09 np0005534696 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 04:46:09 np0005534696 systemd[1]: Started Open-iSCSI.
Nov 25 04:46:09 np0005534696 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 25 04:46:09 np0005534696 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 25 04:46:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:10.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:10 np0005534696 python3.9[202177]: ansible-ansible.builtin.service_facts Invoked
Nov 25 04:46:10 np0005534696 network[202194]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:46:10 np0005534696 network[202195]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:46:10 np0005534696 network[202196]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:46:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:10 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd740033d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:11.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005f40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:12.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:12 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:13.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74004540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:14.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:14 np0005534696 python3.9[202472]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 04:46:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:15 np0005534696 python3.9[202624]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 04:46:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:15.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:15 np0005534696 podman[202706]: 2025-11-25 09:46:15.366261648 +0000 UTC m=+0.071556842 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:46:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:15 np0005534696 python3.9[202804]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:15 np0005534696 python3.9[202928]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063975.2308073-479-220779005168908/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:16.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:16 np0005534696 podman[203142]: 2025-11-25 09:46:16.440735681 +0000 UTC m=+0.050494946 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:46:16 np0005534696 podman[203204]: 2025-11-25 09:46:16.582720122 +0000 UTC m=+0.047565482 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:46:16 np0005534696 podman[203142]: 2025-11-25 09:46:16.585837069 +0000 UTC m=+0.195596324 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:46:16 np0005534696 python3.9[203203]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:16 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:16 np0005534696 podman[203375]: 2025-11-25 09:46:16.951258369 +0000 UTC m=+0.036798234 container exec 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:46:16 np0005534696 podman[203375]: 2025-11-25 09:46:16.956160521 +0000 UTC m=+0.041700365 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:46:17 np0005534696 podman[203452]: 2025-11-25 09:46:17.160936989 +0000 UTC m=+0.040245542 container exec 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:46:17 np0005534696 podman[203452]: 2025-11-25 09:46:17.176824852 +0000 UTC m=+0.056133385 container exec_died 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:46:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:17.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:17 np0005534696 podman[203577]: 2025-11-25 09:46:17.319927502 +0000 UTC m=+0.039171107 container exec 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, build-date=2023-02-22T09:23:20, release=1793, io.buildah.version=1.28.2, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.expose-services=)
Nov 25 04:46:17 np0005534696 podman[203577]: 2025-11-25 09:46:17.326769772 +0000 UTC m=+0.046013357 container exec_died 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, vendor=Red Hat, Inc., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 25 04:46:17 np0005534696 podman[203617]: 2025-11-25 09:46:17.431865892 +0000 UTC m=+0.035449611 container exec e9b26ee0cdfd1574982440200acae1b90f9cb988aa79eb6267e87f95f3cd119b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 25 04:46:17 np0005534696 podman[203617]: 2025-11-25 09:46:17.443863269 +0000 UTC m=+0.047446989 container exec_died e9b26ee0cdfd1574982440200acae1b90f9cb988aa79eb6267e87f95f3cd119b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:46:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:17 np0005534696 python3.9[203576]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:46:17 np0005534696 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 04:46:17 np0005534696 systemd[1]: Stopped Load Kernel Modules.
Nov 25 04:46:17 np0005534696 systemd[1]: Stopping Load Kernel Modules...
Nov 25 04:46:17 np0005534696 systemd[1]: Starting Load Kernel Modules...
Nov 25 04:46:17 np0005534696 systemd[1]: Finished Load Kernel Modules.
Nov 25 04:46:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:18 np0005534696 python3.9[203889]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:46:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:18.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:46:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:46:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:46:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:46:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:46:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:46:18 np0005534696 python3.9[204057]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:46:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:18 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74004540 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:19.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:19 np0005534696 python3.9[204210]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:46:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:19 np0005534696 python3.9[204362]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:20 np0005534696 python3.9[204486]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063979.518822-653-208604872440258/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:20.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:20 np0005534696 python3.9[204663]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:46:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:20 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:21.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:21 np0005534696 python3.9[204817]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:21 np0005534696 python3.9[204995]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:46:22 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:46:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:22.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:22 np0005534696 python3.9[205147]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:22 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:23 np0005534696 python3.9[205299]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:23.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:23 np0005534696 python3.9[205452]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:23 np0005534696 python3.9[205605]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:24 np0005534696 python3.9[205757]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:24.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:24 np0005534696 python3.9[205909]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:46:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:24 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64005fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:25.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:25 np0005534696 python3.9[206064]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:25 np0005534696 python3.9[206217]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:46:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:26.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:26 np0005534696 python3.9[206369]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:26 np0005534696 python3.9[206447]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:46:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:26 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:27 np0005534696 python3.9[206600]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:27.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:27 np0005534696 python3.9[206678]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:46:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:28 np0005534696 python3.9[206831]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:28.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:28 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:28 np0005534696 python3.9[206983]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:29 np0005534696 python3.9[207062]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:29.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:29 np0005534696 python3.9[207214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:30 np0005534696 python3.9[207293]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:30.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:30 np0005534696 python3.9[207445]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:46:30 np0005534696 systemd[1]: Reloading.
Nov 25 04:46:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:30 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd680c2850 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:30 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:46:30 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:46:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:31.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:31 np0005534696 python3.9[207637]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:32 np0005534696 python3.9[207716]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:32.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:32 np0005534696 python3.9[207868]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:32 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:33 np0005534696 python3.9[207949]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:33.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c002320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:33 np0005534696 python3.9[208102]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:46:33 np0005534696 systemd[1]: Reloading.
Nov 25 04:46:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:33 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:46:33 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:46:34 np0005534696 systemd[1]: Starting Create netns directory...
Nov 25 04:46:34 np0005534696 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 04:46:34 np0005534696 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 04:46:34 np0005534696 systemd[1]: Finished Create netns directory.
Nov 25 04:46:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:34.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:34 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:34 np0005534696 python3.9[208297]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:46:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:35.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:35 np0005534696 python3.9[208450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:35 np0005534696 python3.9[208574]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764063995.132806-1274-71715700920986/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:46:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:46:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:36.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:46:36 np0005534696 python3.9[208726]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:46:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:36 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c002320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:37 np0005534696 podman[208851]: 2025-11-25 09:46:37.232188474 +0000 UTC m=+0.042337132 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 04:46:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:37.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:37 np0005534696 python3.9[208894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:37 np0005534696 python3.9[209017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063997.021436-1348-216593104938360/.source.json _original_basename=.9prnpmxk follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:38 np0005534696 python3.9[209170]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:38.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:38 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:39.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:40 np0005534696 python3.9[209599]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 04:46:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:40.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094640 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:46:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:40 np0005534696 python3.9[209776]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 04:46:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:40 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:41.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:41 np0005534696 python3.9[209929]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 04:46:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:42.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:42 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:43.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:43 np0005534696 python3[210102]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 04:46:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd74005250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44005170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:44 np0005534696 podman[210113]: 2025-11-25 09:46:44.533304681 +0000 UTC m=+1.087429652 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 04:46:44 np0005534696 podman[210161]: 2025-11-25 09:46:44.624303548 +0000 UTC m=+0.027923906 container create 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 04:46:44 np0005534696 podman[210161]: 2025-11-25 09:46:44.611084039 +0000 UTC m=+0.014704407 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 04:46:44 np0005534696 python3[210102]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24
Nov 25 04:46:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:44 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:45 np0005534696 python3.9[210341]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:46:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:45.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:45 np0005534696 podman[210468]: 2025-11-25 09:46:45.871975323 +0000 UTC m=+0.058891858 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 04:46:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:46 np0005534696 python3.9[210513]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:46 np0005534696 python3.9[210596]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:46:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:46.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:46 np0005534696 python3.9[210747]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764064006.3719792-1612-210674351573232/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:46 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:47 np0005534696 python3.9[210823]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:46:47 np0005534696 systemd[1]: Reloading.
Nov 25 04:46:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:47 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:46:47 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:46:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:47 np0005534696 python3.9[210935]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:46:47 np0005534696 systemd[1]: Reloading.
Nov 25 04:46:48 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:46:48 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:46:48 np0005534696 systemd[1]: Starting multipathd container...
Nov 25 04:46:48 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:46:48 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7cc4b8901034cd1836b9586267099df9531ed5fccf3bb6873a3148807f0cf12/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 04:46:48 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7cc4b8901034cd1836b9586267099df9531ed5fccf3bb6873a3148807f0cf12/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 04:46:48 np0005534696 systemd[1]: Started /usr/bin/podman healthcheck run 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d.
Nov 25 04:46:48 np0005534696 podman[210976]: 2025-11-25 09:46:48.321529657 +0000 UTC m=+0.077067728 container init 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:46:48 np0005534696 multipathd[210988]: + sudo -E kolla_set_configs
Nov 25 04:46:48 np0005534696 podman[210976]: 2025-11-25 09:46:48.350986962 +0000 UTC m=+0.106525043 container start 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:46:48 np0005534696 podman[210976]: multipathd
Nov 25 04:46:48 np0005534696 systemd[1]: Started multipathd container.
Nov 25 04:46:48 np0005534696 multipathd[210988]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 04:46:48 np0005534696 multipathd[210988]: INFO:__main__:Validating config file
Nov 25 04:46:48 np0005534696 multipathd[210988]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 04:46:48 np0005534696 multipathd[210988]: INFO:__main__:Writing out command to execute
Nov 25 04:46:48 np0005534696 multipathd[210988]: ++ cat /run_command
Nov 25 04:46:48 np0005534696 multipathd[210988]: + CMD='/usr/sbin/multipathd -d'
Nov 25 04:46:48 np0005534696 multipathd[210988]: + ARGS=
Nov 25 04:46:48 np0005534696 multipathd[210988]: + sudo kolla_copy_cacerts
Nov 25 04:46:48 np0005534696 multipathd[210988]: Running command: '/usr/sbin/multipathd -d'
Nov 25 04:46:48 np0005534696 multipathd[210988]: + [[ ! -n '' ]]
Nov 25 04:46:48 np0005534696 multipathd[210988]: + . kolla_extend_start
Nov 25 04:46:48 np0005534696 multipathd[210988]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 04:46:48 np0005534696 multipathd[210988]: + umask 0022
Nov 25 04:46:48 np0005534696 multipathd[210988]: + exec /usr/sbin/multipathd -d
Nov 25 04:46:48 np0005534696 podman[210995]: 2025-11-25 09:46:48.409164984 +0000 UTC m=+0.059000914 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:46:48 np0005534696 systemd[1]: 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d-61bb6dd5f2f6e43c.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 04:46:48 np0005534696 systemd[1]: 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d-61bb6dd5f2f6e43c.service: Failed with result 'exit-code'.
Nov 25 04:46:48 np0005534696 multipathd[210988]: 2789.884311 | --------start up--------
Nov 25 04:46:48 np0005534696 multipathd[210988]: 2789.884325 | read /etc/multipath.conf
Nov 25 04:46:48 np0005534696 multipathd[210988]: 2789.888759 | path checkers start up
Nov 25 04:46:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:46:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:46:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:48 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:48 np0005534696 python3.9[211174]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:46:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:49.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:49 np0005534696 python3.9[211329]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:46:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:46:50 np0005534696 python3.9[211491]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:46:50 np0005534696 systemd[1]: Stopping multipathd container...
Nov 25 04:46:50 np0005534696 multipathd[210988]: 2791.621399 | exit (signal)
Nov 25 04:46:50 np0005534696 multipathd[210988]: 2791.621711 | --------shut down-------
Nov 25 04:46:50 np0005534696 systemd[1]: libpod-0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d.scope: Deactivated successfully.
Nov 25 04:46:50 np0005534696 conmon[210988]: conmon 0ef242e8ddb34cae11c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d.scope/container/memory.events
Nov 25 04:46:50 np0005534696 podman[211495]: 2025-11-25 09:46:50.18447945 +0000 UTC m=+0.056870550 container died 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:46:50 np0005534696 systemd[1]: 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d-61bb6dd5f2f6e43c.timer: Deactivated successfully.
Nov 25 04:46:50 np0005534696 systemd[1]: Stopped /usr/bin/podman healthcheck run 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d.
Nov 25 04:46:50 np0005534696 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d-userdata-shm.mount: Deactivated successfully.
Nov 25 04:46:50 np0005534696 systemd[1]: var-lib-containers-storage-overlay-b7cc4b8901034cd1836b9586267099df9531ed5fccf3bb6873a3148807f0cf12-merged.mount: Deactivated successfully.
Nov 25 04:46:50 np0005534696 podman[211495]: 2025-11-25 09:46:50.327450481 +0000 UTC m=+0.199841580 container cleanup 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 04:46:50 np0005534696 podman[211495]: multipathd
Nov 25 04:46:50 np0005534696 podman[211517]: multipathd
Nov 25 04:46:50 np0005534696 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 04:46:50 np0005534696 systemd[1]: Stopped multipathd container.
Nov 25 04:46:50 np0005534696 systemd[1]: Starting multipathd container...
Nov 25 04:46:50 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:46:50 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7cc4b8901034cd1836b9586267099df9531ed5fccf3bb6873a3148807f0cf12/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 04:46:50 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7cc4b8901034cd1836b9586267099df9531ed5fccf3bb6873a3148807f0cf12/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 04:46:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:50.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:50 np0005534696 systemd[1]: Started /usr/bin/podman healthcheck run 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d.
Nov 25 04:46:50 np0005534696 podman[211527]: 2025-11-25 09:46:50.464619945 +0000 UTC m=+0.075662279 container init 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 25 04:46:50 np0005534696 multipathd[211539]: + sudo -E kolla_set_configs
Nov 25 04:46:50 np0005534696 podman[211527]: multipathd
Nov 25 04:46:50 np0005534696 podman[211527]: 2025-11-25 09:46:50.487584943 +0000 UTC m=+0.098627258 container start 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 04:46:50 np0005534696 systemd[1]: Started multipathd container.
Nov 25 04:46:50 np0005534696 multipathd[211539]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 04:46:50 np0005534696 multipathd[211539]: INFO:__main__:Validating config file
Nov 25 04:46:50 np0005534696 multipathd[211539]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 04:46:50 np0005534696 multipathd[211539]: INFO:__main__:Writing out command to execute
Nov 25 04:46:50 np0005534696 multipathd[211539]: ++ cat /run_command
Nov 25 04:46:50 np0005534696 multipathd[211539]: + CMD='/usr/sbin/multipathd -d'
Nov 25 04:46:50 np0005534696 multipathd[211539]: + ARGS=
Nov 25 04:46:50 np0005534696 multipathd[211539]: + sudo kolla_copy_cacerts
Nov 25 04:46:50 np0005534696 multipathd[211539]: + [[ ! -n '' ]]
Nov 25 04:46:50 np0005534696 multipathd[211539]: + . kolla_extend_start
Nov 25 04:46:50 np0005534696 multipathd[211539]: Running command: '/usr/sbin/multipathd -d'
Nov 25 04:46:50 np0005534696 multipathd[211539]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 04:46:50 np0005534696 multipathd[211539]: + umask 0022
Nov 25 04:46:50 np0005534696 multipathd[211539]: + exec /usr/sbin/multipathd -d
Nov 25 04:46:50 np0005534696 podman[211546]: 2025-11-25 09:46:50.545608914 +0000 UTC m=+0.048688374 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 25 04:46:50 np0005534696 systemd[1]: 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d-5fefdb6eb0c0049c.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 04:46:50 np0005534696 systemd[1]: 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d-5fefdb6eb0c0049c.service: Failed with result 'exit-code'.
Nov 25 04:46:50 np0005534696 multipathd[211539]: 2792.023392 | --------start up--------
Nov 25 04:46:50 np0005534696 multipathd[211539]: 2792.023403 | read /etc/multipath.conf
Nov 25 04:46:50 np0005534696 multipathd[211539]: 2792.027206 | path checkers start up
Nov 25 04:46:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:50 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:51 np0005534696 python3.9[211727]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:51.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:51 np0005534696 python3.9[211881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 04:46:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:52.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:52 np0005534696 python3.9[212033]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 04:46:52 np0005534696 kernel: Key type psk registered
Nov 25 04:46:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:46:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:46:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:46:53 np0005534696 python3.9[212196]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:46:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:46:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:53.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:46:53 np0005534696 python3.9[212320]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764064012.681202-1853-113269685429663/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:54 np0005534696 python3.9[212473]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:46:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:54.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:54 np0005534696 python3.9[212625]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:46:54 np0005534696 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 04:46:54 np0005534696 systemd[1]: Stopped Load Kernel Modules.
Nov 25 04:46:54 np0005534696 systemd[1]: Stopping Load Kernel Modules...
Nov 25 04:46:54 np0005534696 systemd[1]: Starting Load Kernel Modules...
Nov 25 04:46:54 np0005534696 systemd[1]: Finished Load Kernel Modules.
Nov 25 04:46:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:54 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:55.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:55 np0005534696 python3.9[212782]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 04:46:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:46:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:46:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:56.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:56 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd7400d9a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:46:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:57.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:46:57 np0005534696 systemd[1]: Reloading.
Nov 25 04:46:57 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:46:57 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:46:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd640060c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:57 np0005534696 systemd[1]: Reloading.
Nov 25 04:46:57 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:46:57 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:46:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:57 np0005534696 systemd-logind[744]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 04:46:58 np0005534696 systemd-logind[744]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 04:46:58 np0005534696 lvm[212893]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 04:46:58 np0005534696 lvm[212893]: VG ceph_vg0 finished
Nov 25 04:46:58 np0005534696 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 04:46:58 np0005534696 systemd[1]: Starting man-db-cache-update.service...
Nov 25 04:46:58 np0005534696 systemd[1]: Reloading.
Nov 25 04:46:58 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:46:58 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:46:58 np0005534696 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 04:46:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:46:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:58 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:59 np0005534696 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 04:46:59 np0005534696 systemd[1]: Finished man-db-cache-update.service.
Nov 25 04:46:59 np0005534696 systemd[1]: man-db-cache-update.service: Consumed 1.096s CPU time.
Nov 25 04:46:59 np0005534696 systemd[1]: run-re058e12d6fb14529b53732a20642c73c.service: Deactivated successfully.
Nov 25 04:46:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:46:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:46:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:46:59.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:46:59 np0005534696 python3.9[214255]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:46:59 np0005534696 systemd[1]: Stopping Open-iSCSI...
Nov 25 04:46:59 np0005534696 iscsid[202017]: iscsid shutting down.
Nov 25 04:46:59 np0005534696 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 04:46:59 np0005534696 systemd[1]: Stopped Open-iSCSI.
Nov 25 04:46:59 np0005534696 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 04:46:59 np0005534696 systemd[1]: Starting Open-iSCSI...
Nov 25 04:46:59 np0005534696 systemd[1]: Started Open-iSCSI.
Nov 25 04:46:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd7400d9a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:46:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd640060e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:46:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:46:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:46:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:46:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:00 np0005534696 python3.9[214410]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 04:47:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:00.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:00 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:00 np0005534696 python3.9[214591]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:01.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c019820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd7400d9a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:01 np0005534696 python3.9[214744]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:47:01 np0005534696 systemd[1]: Reloading.
Nov 25 04:47:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:01 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:47:01 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:47:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:02.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:02 np0005534696 python3.9[214930]: ansible-ansible.builtin.service_facts Invoked
Nov 25 04:47:02 np0005534696 network[214947]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 04:47:02 np0005534696 network[214948]: 'network-scripts' will be removed from distribution in near future.
Nov 25 04:47:02 np0005534696 network[214949]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 04:47:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094702 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:47:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:02 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094703 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:47:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:47:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:03.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:47:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd8c019820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:04 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd7400db40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:05.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:47:05.341 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:47:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:47:05.341 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:47:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:47:05.341 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:47:05 np0005534696 python3.9[215228]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:47:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900206a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:06 np0005534696 python3.9[215382]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:47:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:06 np0005534696 python3.9[215535]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:47:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:06 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:07 np0005534696 python3.9[215688]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:47:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:07.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:07 np0005534696 podman[215715]: 2025-11-25 09:47:07.359251704 +0000 UTC m=+0.068708572 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:47:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd7400db40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:07 np0005534696 python3.9[215858]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:47:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:08 np0005534696 python3.9[216012]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:47:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:08.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:08 np0005534696 python3.9[216165]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:47:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:08 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900206a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:09.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:09 np0005534696 python3.9[216319]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:47:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd7400db60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:10 np0005534696 python3.9[216473]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:10.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:10 np0005534696 python3.9[216625]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:10 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:11 np0005534696 python3.9[216779]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:11 np0005534696 python3.9[216931]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:47:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:12 np0005534696 python3.9[217084]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:12.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:12 np0005534696 python3.9[217236]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:12 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:13 np0005534696 python3.9[217388]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:13.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:13 np0005534696 python3.9[217541]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:13 np0005534696 python3.9[217694]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:14 np0005534696 python3.9[217846]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:14.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:47:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:47:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:47:14 np0005534696 python3.9[217998]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:15 np0005534696 python3.9[218151]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:15 np0005534696 python3.9[218303]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:16 np0005534696 podman[218428]: 2025-11-25 09:47:16.044215217 +0000 UTC m=+0.060252242 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:47:16 np0005534696 python3.9[218473]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:16 np0005534696 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 04:47:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:16.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:16 np0005534696 python3.9[218632]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:16 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:17 np0005534696 python3.9[218784]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:17 np0005534696 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 04:47:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:17 np0005534696 python3.9[218938]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:47:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:18 np0005534696 python3.9[219091]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 04:47:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:18.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:18 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:19 np0005534696 python3.9[219243]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:47:19 np0005534696 systemd[1]: Reloading.
Nov 25 04:47:19 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:47:19 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:47:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd64006120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:19 np0005534696 python3.9[219432]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:20 np0005534696 python3.9[219585]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:20.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:20 np0005534696 podman[219733]: 2025-11-25 09:47:20.685184938 +0000 UTC m=+0.043759074 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:47:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:20 np0005534696 python3.9[219780]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:20 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:21 np0005534696 python3.9[219934]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:21.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:21 np0005534696 python3.9[220138]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:22 np0005534696 python3.9[220320]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:22.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:22 np0005534696 python3.9[220473]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:22 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98002e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:23 np0005534696 python3.9[220626]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 04:47:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094723 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:47:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:23.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:47:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:47:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:47:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:47:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:47:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:47:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:24.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:24 np0005534696 python3.9[220781]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:24 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:25 np0005534696 python3.9[220933]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd980039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:25 np0005534696 python3.9[221086]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:26 np0005534696 python3.9[221239]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:26.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:26 np0005534696 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 04:47:26 np0005534696 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 04:47:26 np0005534696 python3.9[221391]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094726 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:47:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:26 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:27 np0005534696 python3.9[221570]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:27.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:27 np0005534696 python3.9[221723]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9004b880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:27 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:47:27 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:47:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd980039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:27 np0005534696 python3.9[221876]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:28 np0005534696 python3.9[222028]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:28.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:28 np0005534696 python3.9[222180]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:28 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:29.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:30.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:30 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:47:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:31.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:47:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:32.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:32 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd980039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:33.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:33 np0005534696 python3.9[222337]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 04:47:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:34 np0005534696 python3.9[222491]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 04:47:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:34.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:34 np0005534696 python3.9[222649]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 04:47:34 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:47:34 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:47:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:34 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:35.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:35 np0005534696 systemd-logind[744]: New session 53 of user zuul.
Nov 25 04:47:35 np0005534696 systemd[1]: Started Session 53 of User zuul.
Nov 25 04:47:35 np0005534696 systemd[1]: session-53.scope: Deactivated successfully.
Nov 25 04:47:35 np0005534696 systemd-logind[744]: Session 53 logged out. Waiting for processes to exit.
Nov 25 04:47:35 np0005534696 systemd-logind[744]: Removed session 53.
Nov 25 04:47:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:47:36 np0005534696 python3.9[222838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:36.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:36 np0005534696 python3.9[222959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064056.019397-3436-280705888146163/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:36 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:37 np0005534696 python3.9[223109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:37.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:37 np0005534696 python3.9[223186]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:37 np0005534696 podman[223187]: 2025-11-25 09:47:37.494589595 +0000 UTC m=+0.042797352 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 04:47:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:37 np0005534696 python3.9[223353]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:38 np0005534696 python3.9[223475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064057.5525-3436-73425927133123/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:38.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:38 np0005534696 python3.9[223625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:38 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:38 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:47:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:38 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:47:39 np0005534696 python3.9[223746]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064058.3513613-3436-142075493372122/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:39.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:39 np0005534696 python3.9[223897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:39 np0005534696 python3.9[224019]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064059.2137866-3436-52910356833012/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:40 np0005534696 python3.9[224169]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:40.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:40 np0005534696 python3.9[224290]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064060.0090227-3436-248738913423275/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:40 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:41.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:41 np0005534696 python3.9[224468]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:41 np0005534696 python3.9[224621]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:47:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:47:42 np0005534696 python3.9[224773]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:47:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:42.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:42 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:42 np0005534696 python3.9[224925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:43 np0005534696 python3.9[225049]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764064062.6710083-3758-136842626485654/.source _original_basename=.z50z0eme follow=False checksum=a7e46f10ceac5b07178a7214eb0184a63cb77e95 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 04:47:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:43 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:44 np0005534696 python3.9[225202]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:47:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:44 np0005534696 python3.9[225354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:44 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:45 np0005534696 python3.9[225475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064064.3697972-3835-34784904629860/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=4c77b2c041a7564aa2c84115117dc8517e9bb9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:45.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:45 np0005534696 python3.9[225626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 04:47:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:45 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:46 np0005534696 python3.9[225748]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764064065.2947786-3880-253625346435693/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=941d5739094d046b86479403aeaaf0441b82ba11 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 04:47:46 np0005534696 podman[225773]: 2025-11-25 09:47:46.352273468 +0000 UTC m=+0.062852506 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:47:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:46.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:46 np0005534696 python3.9[225923]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 04:47:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:46 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:47.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:47 np0005534696 python3.9[226076]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 04:47:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:47 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:48 np0005534696 python3[226229]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 04:47:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:48.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094748 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:47:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:48 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:49.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:49 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:50.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:50 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98004e20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:51 np0005534696 podman[226264]: 2025-11-25 09:47:51.331923223 +0000 UTC m=+0.041040721 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=multipathd)
Nov 25 04:47:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:51.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:51 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:52.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:52 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd900293c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:53.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:53 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:54.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094754 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:47:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:54 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:55.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c004fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:47:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:55 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:56 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:57.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:57 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c005ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:47:58.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:58 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:59 np0005534696 podman[226240]: 2025-11-25 09:47:59.303400921 +0000 UTC m=+11.099224482 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 04:47:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:47:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:47:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:47:59.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:47:59 np0005534696 podman[226334]: 2025-11-25 09:47:59.412876197 +0000 UTC m=+0.033053645 container create 90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 04:47:59 np0005534696 podman[226334]: 2025-11-25 09:47:59.396900244 +0000 UTC m=+0.017077702 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 04:47:59 np0005534696 python3[226229]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Nov 25 04:47:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd98005f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:47:59 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:47:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:47:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:47:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:47:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:00.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:00 np0005534696 python3.9[226517]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:48:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:00 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4004e00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:01 np0005534696 python3.9[226697]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 04:48:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:01 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c005ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:02.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:02 np0005534696 python3.9[226850]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 04:48:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:02 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:48:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:03.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:48:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:03 np0005534696 python3[227003]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 04:48:03 np0005534696 podman[227032]: 2025-11-25 09:48:03.862929051 +0000 UTC m=+0.031170967 container create a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 25 04:48:03 np0005534696 podman[227032]: 2025-11-25 09:48:03.848790069 +0000 UTC m=+0.017032005 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076
Nov 25 04:48:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:03 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4005940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:03 np0005534696 python3[227003]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076 kolla_start
Nov 25 04:48:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:04 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:48:04 np0005534696 python3.9[227211]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:48:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:04.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:04 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c005ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:48:05.342 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:48:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:48:05.342 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:48:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:48:05.342 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:48:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:05 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:05 np0005534696 python3.9[227365]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:48:06 np0005534696 python3.9[227518]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764064086.0292587-4156-280468141727524/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 04:48:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:06.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:06 np0005534696 python3.9[227594]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 04:48:06 np0005534696 systemd[1]: Reloading.
Nov 25 04:48:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:06 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4005940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:07 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:48:07 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:48:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:48:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:48:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd9c006f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:07 np0005534696 python3.9[227706]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 04:48:07 np0005534696 systemd[1]: Reloading.
Nov 25 04:48:07 np0005534696 podman[227708]: 2025-11-25 09:48:07.7612922 +0000 UTC m=+0.053625273 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 04:48:07 np0005534696 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 04:48:07 np0005534696 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 04:48:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:07 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:08 np0005534696 systemd[1]: Starting nova_compute container...
Nov 25 04:48:08 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:48:08 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:08 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:08 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:08 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:08 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:08 np0005534696 podman[227764]: 2025-11-25 09:48:08.109663045 +0000 UTC m=+0.072956091 container init a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 04:48:08 np0005534696 podman[227764]: 2025-11-25 09:48:08.114709779 +0000 UTC m=+0.078002814 container start a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 04:48:08 np0005534696 podman[227764]: nova_compute
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + sudo -E kolla_set_configs
Nov 25 04:48:08 np0005534696 systemd[1]: Started nova_compute container.
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Validating config file
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying service configuration files
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Deleting /etc/ceph
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Creating directory /etc/ceph
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Writing out command to execute
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 04:48:08 np0005534696 nova_compute[227776]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 04:48:08 np0005534696 nova_compute[227776]: ++ cat /run_command
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + CMD=nova-compute
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + ARGS=
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + sudo kolla_copy_cacerts
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + [[ ! -n '' ]]
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + . kolla_extend_start
Nov 25 04:48:08 np0005534696 nova_compute[227776]: Running command: 'nova-compute'
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + umask 0022
Nov 25 04:48:08 np0005534696 nova_compute[227776]: + exec nova-compute
Nov 25 04:48:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:08.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:08 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:09 np0005534696 python3.9[227938]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:48:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4005940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:09 np0005534696 python3.9[228089]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:48:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:09 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4005940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.125 227780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.126 227780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.126 227780 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.126 227780 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 25 04:48:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:10 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.255 227780 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.271 227780 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.272 227780 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 25 04:48:10 np0005534696 python3.9[228244]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 04:48:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:10.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.664 227780 INFO nova.virt.driver [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.751 227780 INFO nova.compute.provider_config [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.772 227780 DEBUG oslo_concurrency.lockutils [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.772 227780 DEBUG oslo_concurrency.lockutils [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.773 227780 DEBUG oslo_concurrency.lockutils [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.773 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.773 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.773 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.773 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.774 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.774 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.774 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.774 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.774 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.775 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.775 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.775 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.775 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.775 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.775 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.776 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.776 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.776 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.776 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.776 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.777 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.777 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.777 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.777 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.777 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.777 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.778 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.778 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.778 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.778 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.778 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.779 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.779 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.779 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.779 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.779 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.780 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.780 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.780 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.780 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.780 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.781 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.781 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.781 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.781 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.782 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.782 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.782 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.782 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.782 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.782 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.783 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.783 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.783 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.783 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.783 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.784 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.784 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.784 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.784 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.784 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.785 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.785 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.785 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.785 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.785 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.785 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.786 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.786 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.786 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.786 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.786 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.786 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.787 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.787 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.787 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.787 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.787 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.788 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.788 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.788 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.788 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.788 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.789 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.789 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.789 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.789 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.789 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.789 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.790 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.790 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.790 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.790 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.790 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.790 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.791 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.791 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.791 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.791 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.791 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.792 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.792 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.792 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.792 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.792 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.792 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.793 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.793 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.793 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.793 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.793 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.793 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.794 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.794 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.794 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.794 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.794 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.795 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.795 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.795 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.795 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.795 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.795 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.796 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.796 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.796 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.796 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.796 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.797 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.797 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.797 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.797 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.797 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.797 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.798 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.798 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.798 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.799 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.799 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.799 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.799 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.799 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.799 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.800 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.800 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.800 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.800 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.800 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.801 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.801 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.801 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.801 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.801 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.802 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.802 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.802 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.802 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.802 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.803 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.803 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.803 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.803 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.803 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.803 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.804 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.804 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.804 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.804 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.804 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.805 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.805 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.805 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.805 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.805 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.805 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.806 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.806 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.806 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.806 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.806 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.807 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.807 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.807 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.807 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.807 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.808 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.808 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.808 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.808 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.808 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.808 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.809 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.809 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.809 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.809 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.809 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.810 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.810 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.810 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.810 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.810 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.810 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.811 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.811 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.811 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.811 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.811 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.811 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.812 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.812 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.812 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.812 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.812 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.813 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.813 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.813 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.813 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.813 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.813 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.814 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.814 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.814 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.814 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.814 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.815 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.815 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.815 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.815 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.815 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.815 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.816 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.816 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.816 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.816 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.816 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.817 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.817 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.817 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.817 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.817 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.817 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.818 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.818 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.818 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.818 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.818 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.819 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.819 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.819 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.819 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.819 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.819 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.820 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.820 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.820 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.820 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.820 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.821 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.821 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.821 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.821 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.821 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.821 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.822 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.822 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.822 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.822 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.822 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.823 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.823 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.823 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.823 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.823 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.823 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.824 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.824 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.824 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.824 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.824 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.824 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.825 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.825 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.825 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.825 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.826 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.826 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.826 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.826 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.826 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.827 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.827 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.827 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.827 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.827 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.828 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.828 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.828 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.828 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.828 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.828 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.829 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.829 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.829 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.829 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.829 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.830 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.830 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.830 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.830 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.830 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.830 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.831 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.831 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.831 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.831 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.831 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.832 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.832 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.832 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.832 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.832 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.832 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.833 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.833 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.833 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.833 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.833 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.833 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.834 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.834 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.834 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.834 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.834 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.835 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.835 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.835 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.835 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.836 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.836 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.836 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.836 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.836 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.837 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.837 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.837 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.837 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.837 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.837 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.838 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.838 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.838 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.838 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.838 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.839 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.839 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.839 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.839 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.840 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.840 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.840 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.840 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.840 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.840 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.841 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.841 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.841 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.841 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.841 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.842 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.842 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.842 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.842 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.842 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.842 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.843 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.843 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.843 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.843 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.843 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.843 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.844 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.844 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.844 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.844 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.844 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.845 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.845 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.845 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.845 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.845 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.845 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.846 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.846 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.846 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.846 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.846 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.847 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.847 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.847 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.847 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.847 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.847 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.848 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.848 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.848 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.848 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.848 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.849 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.849 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.849 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.849 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.849 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.849 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.850 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.850 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.850 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.850 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.850 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.850 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.851 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.851 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.851 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.851 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.851 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.852 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.852 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.852 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.852 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.852 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.852 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.853 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.853 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.853 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.853 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.853 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.854 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.854 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.854 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.854 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.854 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.854 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.855 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.855 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.855 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.855 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.855 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.855 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.856 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.856 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.856 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.856 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.856 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.857 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.857 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.857 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.857 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.857 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.857 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.858 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.858 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.858 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.858 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.858 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.859 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.859 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.859 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.859 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.859 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.859 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.860 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.860 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.860 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.860 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.860 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.861 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.861 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.861 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.861 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.861 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.861 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.862 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.862 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.862 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.862 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.862 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.862 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.863 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.863 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.863 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.864 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.864 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.864 227780 WARNING oslo_config.cfg [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 04:48:10 np0005534696 nova_compute[227776]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 04:48:10 np0005534696 nova_compute[227776]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 04:48:10 np0005534696 nova_compute[227776]: and ``live_migration_inbound_addr`` respectively.
Nov 25 04:48:10 np0005534696 nova_compute[227776]: ).  Its value may be silently ignored in the future.#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.864 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.865 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.865 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.865 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.865 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.865 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.865 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.866 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.866 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.866 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.866 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.866 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.867 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.867 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.867 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.867 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.867 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.868 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.868 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rbd_secret_uuid        = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.868 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.868 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.868 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.868 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.869 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.869 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.869 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.869 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.869 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.870 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.870 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.870 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.870 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.870 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.871 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.871 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.871 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.871 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.871 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.871 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.872 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.872 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.872 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.872 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.872 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.873 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.873 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.873 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.873 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.873 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.873 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.874 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.874 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.874 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.874 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.874 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.875 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.875 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.875 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.875 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.875 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.875 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.876 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.876 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.876 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.876 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.876 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.876 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.877 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.877 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.877 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.877 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.877 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.878 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.878 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.878 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.878 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.878 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.878 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.879 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.879 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.879 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.879 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.879 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.880 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.880 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.880 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.880 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.880 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.880 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.881 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.881 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.881 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.881 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.881 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.882 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.882 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.882 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.882 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.882 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.882 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.883 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.883 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.883 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.883 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.883 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.883 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.884 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.884 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.884 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.884 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.884 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.884 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.885 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.885 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.885 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.885 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.885 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.886 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.886 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.886 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.886 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.886 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.886 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.887 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.887 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.887 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.887 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.887 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.887 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.888 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.888 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.888 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.888 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.888 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.889 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.889 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.889 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.889 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.889 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.890 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.890 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.890 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.890 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.890 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.890 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.891 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.891 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.891 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.891 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.891 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.892 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.892 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.892 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.892 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.892 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.892 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.893 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.893 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.893 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.893 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.893 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.894 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.894 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.894 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.894 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.894 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.894 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.895 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.895 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.895 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.895 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.895 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.896 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.896 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.896 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.896 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.896 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.896 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.897 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.897 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.897 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.897 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.897 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.898 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.898 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.898 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.898 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.898 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.899 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.899 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.899 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.899 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.899 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.899 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.900 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.900 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.900 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.900 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.900 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.901 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.901 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.901 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.901 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.901 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.901 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 04:48:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.902 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.902 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.902 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.902 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.902 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.903 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.903 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.903 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.903 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.903 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.903 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.904 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.904 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.904 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.904 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.904 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.904 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.905 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.905 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.905 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.905 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.905 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.906 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.906 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.906 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.906 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.906 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.906 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.907 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.907 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.907 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.907 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.907 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.907 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.908 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.908 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.908 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.908 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.908 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.908 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.909 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.909 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.909 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.909 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.910 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.910 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.910 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.910 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.910 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.910 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.911 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.911 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.911 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.911 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.911 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.912 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.912 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.912 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.912 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.912 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.912 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.913 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.913 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.913 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.913 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.913 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.913 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.914 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.914 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.914 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.914 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.914 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.915 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.915 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.915 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.915 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.915 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.915 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.916 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.916 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.916 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.916 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.916 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.916 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.917 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.917 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.917 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.917 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.917 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.918 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.918 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.918 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.918 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.918 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.919 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.919 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.919 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.919 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.919 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.919 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.920 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.920 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.920 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.920 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.920 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.921 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.921 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.921 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.921 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.921 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.921 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.922 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.922 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.922 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.922 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.922 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.922 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.923 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.923 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.923 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.923 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.923 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.924 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.924 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.924 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.924 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.924 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.924 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.925 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.925 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.925 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.925 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.925 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.926 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.926 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.926 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.926 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.926 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.927 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.927 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.927 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.927 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.927 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.927 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.928 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.928 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.928 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.928 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.928 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.929 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.929 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.929 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.929 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.929 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.929 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.930 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.930 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.930 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.930 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.930 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.930 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.931 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.931 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.931 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.931 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.931 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.932 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.932 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.932 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.932 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.932 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.932 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.933 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.933 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.933 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.933 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.933 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.933 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.934 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.934 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.934 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.934 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.934 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.935 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.935 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.935 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.935 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.935 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.935 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.936 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.936 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.936 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.936 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.936 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.937 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.937 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.937 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.937 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.937 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.937 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.938 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.938 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.938 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.938 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.938 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.938 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.939 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.939 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.939 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.939 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.939 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.940 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.940 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.940 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.940 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.940 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.940 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.941 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.941 227780 DEBUG oslo_service.service [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.942 227780 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.950 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.950 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.951 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 25 04:48:10 np0005534696 nova_compute[227776]: 2025-11-25 09:48:10.951 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 25 04:48:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:10 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:10 np0005534696 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 04:48:10 np0005534696 systemd[1]: Started libvirt QEMU daemon.
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.007 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6fbcd20490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.009 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6fbcd20490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.010 227780 INFO nova.virt.libvirt.driver [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.021 227780 WARNING nova.virt.libvirt.driver [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.021 227780 DEBUG nova.virt.libvirt.volume.mount [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 25 04:48:11 np0005534696 python3.9[228449]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 04:48:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:11.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:11 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:48:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.770 227780 INFO nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <host>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <uuid>24892f32-0515-4f8a-815c-b9b89f76dd8c</uuid>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <arch>x86_64</arch>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model>EPYC-Milan-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <vendor>AMD</vendor>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <microcode version='167776725'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <signature family='25' model='1' stepping='1'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <maxphysaddr mode='emulate' bits='48'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='x2apic'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='tsc-deadline'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='osxsave'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='hypervisor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='tsc_adjust'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='ospke'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='vaes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='vpclmulqdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='spec-ctrl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='stibp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='arch-capabilities'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='cmp_legacy'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='virt-ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='lbrv'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='tsc-scale'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='vmcb-clean'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='pause-filter'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='pfthreshold'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='v-vmsave-vmload'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='vgif'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='rdctl-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='skip-l1dfl-vmentry'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='mds-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature name='pschange-mc-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <pages unit='KiB' size='4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <pages unit='KiB' size='2048'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <pages unit='KiB' size='1048576'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <power_management>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <suspend_mem/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </power_management>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <iommu support='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <migration_features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <live/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <uri_transports>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <uri_transport>tcp</uri_transport>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <uri_transport>rdma</uri_transport>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </uri_transports>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </migration_features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <topology>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <cells num='1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <cell id='0'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:          <memory unit='KiB'>7865372</memory>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:          <pages unit='KiB' size='4'>1966343</pages>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:          <pages unit='KiB' size='2048'>0</pages>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:          <distances>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:            <sibling id='0' value='10'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:          </distances>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:          <cpus num='4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:          </cpus>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        </cell>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </cells>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </topology>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <cache>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </cache>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <secmodel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model>selinux</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <doi>0</doi>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </secmodel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <secmodel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model>dac</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <doi>0</doi>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </secmodel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </host>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <guest>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <os_type>hvm</os_type>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <arch name='i686'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <wordsize>32</wordsize>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <domain type='qemu'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <domain type='kvm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </arch>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <pae/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <nonpae/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <acpi default='on' toggle='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <apic default='on' toggle='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <cpuselection/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <deviceboot/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <disksnapshot default='on' toggle='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <externalSnapshot/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </guest>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <guest>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <os_type>hvm</os_type>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <arch name='x86_64'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <wordsize>64</wordsize>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <domain type='qemu'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <domain type='kvm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </arch>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <acpi default='on' toggle='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <apic default='on' toggle='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <cpuselection/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <deviceboot/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <disksnapshot default='on' toggle='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <externalSnapshot/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </guest>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 
Nov 25 04:48:11 np0005534696 nova_compute[227776]: </capabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: #033[00m
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.774 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.792 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 04:48:11 np0005534696 nova_compute[227776]: <domainCapabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <domain>kvm</domain>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <arch>i686</arch>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <vcpu max='240'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <iothreads supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <os supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <enum name='firmware'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <loader supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>rom</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pflash</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='readonly'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>yes</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>no</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='secure'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>no</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </loader>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </os>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='host-passthrough' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='hostPassthroughMigratable'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>on</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>off</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='maximum' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='maximumMigratable'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>on</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>off</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='host-model' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <vendor>AMD</vendor>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='x2apic'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='hypervisor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vaes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='stibp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='overflow-recov'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='succor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='lbrv'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc-scale'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='flushbyasid'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='pause-filter'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='pfthreshold'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vgif'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='custom' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Denverton'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Denverton-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Genoa'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='auto-ibrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='auto-ibrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Milan-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-128'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-256'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-512'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v6'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v7'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='KnightsMill'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512er'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512pf'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='KnightsMill-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512er'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512pf'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G4-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tbm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G5-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tbm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SierraForest'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cmpccxadd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SierraForest-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cmpccxadd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='athlon'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='athlon-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='core2duo'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='core2duo-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='coreduo'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='coreduo-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='n270'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='n270-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='phenom'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='phenom-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <memoryBacking supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <enum name='sourceType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>file</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>anonymous</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>memfd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </memoryBacking>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <devices>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <disk supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='diskDevice'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>disk</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>cdrom</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>floppy</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>lun</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='bus'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ide</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>fdc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>scsi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>sata</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-non-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </disk>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <graphics supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vnc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>egl-headless</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dbus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </graphics>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <video supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='modelType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vga</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>cirrus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>none</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>bochs</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ramfb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </video>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <hostdev supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='mode'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>subsystem</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='startupPolicy'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>default</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>mandatory</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>requisite</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>optional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='subsysType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pci</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>scsi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='capsType'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='pciBackend'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </hostdev>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <rng supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-non-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>random</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>egd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>builtin</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </rng>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <filesystem supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='driverType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>path</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>handle</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtiofs</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </filesystem>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <tpm supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tpm-tis</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tpm-crb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>emulator</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>external</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendVersion'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>2.0</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </tpm>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <redirdev supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='bus'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </redirdev>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <channel supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pty</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>unix</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </channel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <crypto supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>qemu</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>builtin</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </crypto>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <interface supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>default</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>passt</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </interface>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <panic supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>isa</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>hyperv</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </panic>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <console supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>null</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pty</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dev</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>file</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pipe</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>stdio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>udp</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tcp</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>unix</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>qemu-vdagent</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dbus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </console>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </devices>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <gic supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <vmcoreinfo supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <genid supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <backingStoreInput supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <backup supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <async-teardown supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <ps2 supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <sev supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <sgx supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <hyperv supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='features'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>relaxed</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vapic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>spinlocks</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vpindex</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>runtime</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>synic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>stimer</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>reset</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vendor_id</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>frequencies</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>reenlightenment</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tlbflush</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ipi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>avic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>emsr_bitmap</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>xmm_input</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <defaults>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <spinlocks>4095</spinlocks>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <stimer_direct>on</stimer_direct>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </defaults>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </hyperv>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <launchSecurity supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='sectype'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tdx</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </launchSecurity>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: </domainCapabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.797 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 04:48:11 np0005534696 nova_compute[227776]: <domainCapabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <domain>kvm</domain>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <arch>i686</arch>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <vcpu max='4096'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <iothreads supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <os supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <enum name='firmware'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <loader supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>rom</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pflash</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='readonly'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>yes</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>no</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='secure'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>no</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </loader>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </os>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='host-passthrough' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='hostPassthroughMigratable'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>on</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>off</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='maximum' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='maximumMigratable'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>on</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>off</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='host-model' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <vendor>AMD</vendor>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='x2apic'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='hypervisor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vaes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='stibp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='overflow-recov'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='succor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='lbrv'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc-scale'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='flushbyasid'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='pause-filter'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='pfthreshold'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vgif'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='custom' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Denverton'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Denverton-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Genoa'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='auto-ibrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='auto-ibrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Milan-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-128'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-256'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-512'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v6'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v7'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='KnightsMill'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512er'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512pf'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='KnightsMill-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512er'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512pf'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G4-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tbm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G5-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tbm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SierraForest'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cmpccxadd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SierraForest-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cmpccxadd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='athlon'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='athlon-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='core2duo'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='core2duo-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='coreduo'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='coreduo-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='n270'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='n270-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='phenom'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='phenom-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <memoryBacking supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <enum name='sourceType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>file</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>anonymous</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>memfd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </memoryBacking>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <devices>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <disk supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='diskDevice'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>disk</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>cdrom</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>floppy</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>lun</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='bus'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>fdc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>scsi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>sata</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-non-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </disk>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <graphics supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vnc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>egl-headless</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dbus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </graphics>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <video supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='modelType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vga</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>cirrus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>none</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>bochs</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ramfb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </video>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <hostdev supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='mode'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>subsystem</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='startupPolicy'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>default</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>mandatory</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>requisite</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>optional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='subsysType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pci</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>scsi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='capsType'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='pciBackend'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </hostdev>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <rng supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-non-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>random</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>egd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>builtin</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </rng>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <filesystem supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='driverType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>path</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>handle</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtiofs</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </filesystem>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <tpm supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tpm-tis</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tpm-crb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>emulator</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>external</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendVersion'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>2.0</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </tpm>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <redirdev supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='bus'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </redirdev>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <channel supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pty</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>unix</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </channel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <crypto supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>qemu</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>builtin</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </crypto>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <interface supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>default</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>passt</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </interface>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <panic supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>isa</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>hyperv</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </panic>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <console supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>null</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pty</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dev</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>file</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pipe</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>stdio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>udp</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tcp</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>unix</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>qemu-vdagent</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dbus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </console>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </devices>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <gic supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <vmcoreinfo supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <genid supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <backingStoreInput supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <backup supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <async-teardown supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <ps2 supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <sev supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <sgx supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <hyperv supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='features'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>relaxed</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vapic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>spinlocks</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vpindex</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>runtime</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>synic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>stimer</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>reset</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vendor_id</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>frequencies</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>reenlightenment</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tlbflush</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ipi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>avic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>emsr_bitmap</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>xmm_input</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <defaults>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <spinlocks>4095</spinlocks>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <stimer_direct>on</stimer_direct>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </defaults>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </hyperv>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <launchSecurity supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='sectype'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tdx</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </launchSecurity>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: </domainCapabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.798 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.801 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 04:48:11 np0005534696 nova_compute[227776]: <domainCapabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <domain>kvm</domain>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <arch>x86_64</arch>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <vcpu max='4096'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <iothreads supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <os supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <enum name='firmware'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>efi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <loader supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>rom</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pflash</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='readonly'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>yes</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>no</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='secure'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>yes</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>no</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </loader>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </os>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='host-passthrough' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='hostPassthroughMigratable'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>on</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>off</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='maximum' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='maximumMigratable'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>on</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>off</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='host-model' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <vendor>AMD</vendor>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='x2apic'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='hypervisor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vaes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='stibp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='overflow-recov'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='succor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='lbrv'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc-scale'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='flushbyasid'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='pause-filter'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='pfthreshold'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vgif'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='custom' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Denverton'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Denverton-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Genoa'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='auto-ibrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='auto-ibrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Milan-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:11 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4005940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-128'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-256'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-512'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v6'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v7'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='KnightsMill'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512er'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512pf'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='KnightsMill-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512er'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512pf'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G4-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tbm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G5-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tbm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SierraForest'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cmpccxadd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SierraForest-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cmpccxadd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='athlon'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='athlon-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='core2duo'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='core2duo-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='coreduo'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='coreduo-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='n270'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='n270-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='phenom'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='phenom-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <memoryBacking supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <enum name='sourceType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>file</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>anonymous</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>memfd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </memoryBacking>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <devices>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <disk supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='diskDevice'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>disk</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>cdrom</value>
Nov 25 04:48:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>floppy</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>lun</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='bus'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>fdc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>scsi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>sata</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-non-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </disk>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <graphics supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vnc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>egl-headless</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dbus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </graphics>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <video supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='modelType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vga</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>cirrus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>none</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>bochs</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ramfb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </video>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <hostdev supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='mode'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>subsystem</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='startupPolicy'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>default</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>mandatory</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>requisite</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>optional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='subsysType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pci</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>scsi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='capsType'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='pciBackend'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </hostdev>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <rng supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-non-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>random</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>egd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>builtin</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </rng>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <filesystem supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='driverType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>path</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>handle</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtiofs</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </filesystem>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <tpm supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tpm-tis</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tpm-crb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>emulator</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>external</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendVersion'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>2.0</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </tpm>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <redirdev supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='bus'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </redirdev>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <channel supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pty</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>unix</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </channel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <crypto supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>qemu</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>builtin</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </crypto>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <interface supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>default</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>passt</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </interface>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <panic supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>isa</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>hyperv</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </panic>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <console supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>null</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pty</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dev</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>file</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pipe</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>stdio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>udp</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tcp</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>unix</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>qemu-vdagent</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dbus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </console>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </devices>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <gic supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <vmcoreinfo supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <genid supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <backingStoreInput supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <backup supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <async-teardown supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <ps2 supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <sev supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <sgx supported='no'/>
Nov 25 04:48:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <hyperv supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='features'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>relaxed</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vapic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>spinlocks</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vpindex</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>runtime</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>synic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>stimer</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>reset</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vendor_id</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>frequencies</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>reenlightenment</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tlbflush</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ipi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>avic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>emsr_bitmap</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>xmm_input</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <defaults>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <spinlocks>4095</spinlocks>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <stimer_direct>on</stimer_direct>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </defaults>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </hyperv>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <launchSecurity supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='sectype'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tdx</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </launchSecurity>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: </domainCapabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.843 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 04:48:11 np0005534696 nova_compute[227776]: <domainCapabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <domain>kvm</domain>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <arch>x86_64</arch>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <vcpu max='240'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <iothreads supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <os supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <enum name='firmware'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <loader supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>rom</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pflash</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='readonly'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>yes</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>no</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='secure'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>no</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </loader>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </os>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='host-passthrough' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='hostPassthroughMigratable'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>on</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>off</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='maximum' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='maximumMigratable'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>on</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>off</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='host-model' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <vendor>AMD</vendor>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='x2apic'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='hypervisor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vaes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='stibp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='overflow-recov'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='succor'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='lbrv'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='tsc-scale'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='flushbyasid'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='pause-filter'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='pfthreshold'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='vgif'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <mode name='custom' supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Broadwell-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Cooperlake-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Denverton'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Denverton-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Genoa'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='auto-ibrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='auto-ibrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='EPYC-Milan-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amd-psfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='stibp-always-on'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='GraniteRapids-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-128'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-256'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx10-512'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='prefetchiti'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Haswell-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v6'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Icelake-Server-v7'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='KnightsMill'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512er'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512pf'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='KnightsMill-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512er'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512pf'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G4-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tbm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Opteron_G5-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fma4'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tbm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xop'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SapphireRapids-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='amx-tile'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-bf16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-fp16'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bitalg'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrc'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fzrm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='la57'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='taa-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='xfd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SierraForest'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cmpccxadd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='SierraForest-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ifma'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cmpccxadd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fbsdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='fsrs'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ibrs-all'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mcdt-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='pbrsb-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='psdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='serialize'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Client-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='hle'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='rtm'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Skylake-Server-v5'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512bw'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512cd'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512dq'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512f'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='avx512vl'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='mpx'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v2'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v3'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='core-capability'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='split-lock-detect'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='Snowridge-v4'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='cldemote'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='gfni'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdir64b'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='movdiri'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='athlon'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='athlon-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='core2duo'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='core2duo-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='coreduo'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='coreduo-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='n270'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='n270-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='ss'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='phenom'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <blockers model='phenom-v1'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnow'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <feature name='3dnowext'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </blockers>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </mode>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </cpu>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <memoryBacking supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <enum name='sourceType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>file</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>anonymous</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <value>memfd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </memoryBacking>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <devices>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <disk supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='diskDevice'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>disk</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>cdrom</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>floppy</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>lun</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='bus'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ide</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>fdc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>scsi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>sata</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-non-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </disk>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <graphics supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vnc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>egl-headless</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dbus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </graphics>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <video supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='modelType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vga</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>cirrus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>none</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>bochs</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ramfb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </video>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <hostdev supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='mode'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>subsystem</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='startupPolicy'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>default</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>mandatory</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>requisite</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>optional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='subsysType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pci</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>scsi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='capsType'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='pciBackend'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </hostdev>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <rng supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtio-non-transitional</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>random</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>egd</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>builtin</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </rng>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <filesystem supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='driverType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>path</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>handle</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>virtiofs</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </filesystem>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <tpm supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tpm-tis</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tpm-crb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>emulator</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>external</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendVersion'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>2.0</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </tpm>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <redirdev supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='bus'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>usb</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </redirdev>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <channel supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pty</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>unix</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </channel>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <crypto supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>qemu</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendModel'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>builtin</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </crypto>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <interface supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='backendType'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>default</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>passt</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </interface>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <panic supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='model'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>isa</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>hyperv</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </panic>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <console supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='type'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>null</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vc</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pty</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dev</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>file</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>pipe</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>stdio</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>udp</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tcp</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>unix</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>qemu-vdagent</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>dbus</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </console>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </devices>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  <features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <gic supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <vmcoreinfo supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <genid supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <backingStoreInput supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <backup supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <async-teardown supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <ps2 supported='yes'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <sev supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <sgx supported='no'/>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <hyperv supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='features'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>relaxed</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vapic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>spinlocks</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vpindex</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>runtime</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>synic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>stimer</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>reset</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>vendor_id</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>frequencies</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>reenlightenment</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tlbflush</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>ipi</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>avic</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>emsr_bitmap</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>xmm_input</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <defaults>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <spinlocks>4095</spinlocks>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <stimer_direct>on</stimer_direct>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </defaults>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </hyperv>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    <launchSecurity supported='yes'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      <enum name='sectype'>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:        <value>tdx</value>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:      </enum>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:    </launchSecurity>
Nov 25 04:48:11 np0005534696 nova_compute[227776]:  </features>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: </domainCapabilities>
Nov 25 04:48:11 np0005534696 nova_compute[227776]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.883 227780 DEBUG nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.884 227780 INFO nova.virt.libvirt.host [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Secure Boot support detected
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.885 227780 INFO nova.virt.libvirt.driver [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.885 227780 INFO nova.virt.libvirt.driver [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.893 227780 DEBUG nova.virt.libvirt.driver [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.914 227780 INFO nova.virt.node [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Determined node identity e8eea1e0-1833-4152-af65-8b442fac3e0d from /var/lib/nova/compute_id
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.923 227780 WARNING nova.compute.manager [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Compute nodes ['e8eea1e0-1833-4152-af65-8b442fac3e0d'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.937 227780 INFO nova.compute.manager [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.961 227780 WARNING nova.compute.manager [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.962 227780 DEBUG oslo_concurrency.lockutils [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.962 227780 DEBUG oslo_concurrency.lockutils [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.962 227780 DEBUG oslo_concurrency.lockutils [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.962 227780 DEBUG nova.compute.resource_tracker [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 04:48:11 np0005534696 nova_compute[227776]: 2025-11-25 09:48:11.962 227780 DEBUG oslo_concurrency.processutils [None req-90ba44e7-5513-4560-8629-d39f1146ca1e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:48:12 np0005534696 python3.9[228632]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 04:48:12 np0005534696 systemd[1]: Stopping nova_compute container...
Nov 25 04:48:12 np0005534696 nova_compute[227776]: 2025-11-25 09:48:12.288 227780 DEBUG oslo_concurrency.lockutils [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 04:48:12 np0005534696 nova_compute[227776]: 2025-11-25 09:48:12.289 227780 DEBUG oslo_concurrency.lockutils [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 04:48:12 np0005534696 nova_compute[227776]: 2025-11-25 09:48:12.289 227780 DEBUG oslo_concurrency.lockutils [None req-ebfc0ccf-ed76-43b6-829e-06a4828c910f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:48:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:12.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:12 np0005534696 virtqemud[228342]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 04:48:12 np0005534696 systemd[1]: libpod-a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079.scope: Deactivated successfully.
Nov 25 04:48:12 np0005534696 virtqemud[228342]: hostname: compute-2
Nov 25 04:48:12 np0005534696 virtqemud[228342]: End of file while reading data: Input/output error
Nov 25 04:48:12 np0005534696 systemd[1]: libpod-a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079.scope: Consumed 3.095s CPU time.
Nov 25 04:48:12 np0005534696 podman[228656]: 2025-11-25 09:48:12.693541747 +0000 UTC m=+0.440231492 container died a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 25 04:48:12 np0005534696 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079-userdata-shm.mount: Deactivated successfully.
Nov 25 04:48:12 np0005534696 systemd[1]: var-lib-containers-storage-overlay-e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a-merged.mount: Deactivated successfully.
Nov 25 04:48:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:12 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4005940 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:13 np0005534696 podman[228656]: 2025-11-25 09:48:13.149865181 +0000 UTC m=+0.896554914 container cleanup a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 04:48:13 np0005534696 podman[228656]: nova_compute
Nov 25 04:48:13 np0005534696 podman[228681]: nova_compute
Nov 25 04:48:13 np0005534696 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 04:48:13 np0005534696 systemd[1]: Stopped nova_compute container.
Nov 25 04:48:13 np0005534696 systemd[1]: Starting nova_compute container...
Nov 25 04:48:13 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:48:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7529e8981d64fa12493ad77823f6459932fd2251014e8eefe76f36023bb201a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:13 np0005534696 podman[228691]: 2025-11-25 09:48:13.297064997 +0000 UTC m=+0.078003605 container init a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3)
Nov 25 04:48:13 np0005534696 podman[228691]: 2025-11-25 09:48:13.302672717 +0000 UTC m=+0.083611316 container start a756761bd5b8f5faad5740c43a4a76adddbbbc3f0b0743457b7bec3a64ebd079 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 04:48:13 np0005534696 podman[228691]: nova_compute
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + sudo -E kolla_set_configs
Nov 25 04:48:13 np0005534696 systemd[1]: Started nova_compute container.
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Validating config file
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying service configuration files
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /etc/ceph
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Creating directory /etc/ceph
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Writing out command to execute
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 04:48:13 np0005534696 nova_compute[228704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 04:48:13 np0005534696 nova_compute[228704]: ++ cat /run_command
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + CMD=nova-compute
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + ARGS=
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + sudo kolla_copy_cacerts
Nov 25 04:48:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:48:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + [[ ! -n '' ]]
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + . kolla_extend_start
Nov 25 04:48:13 np0005534696 nova_compute[228704]: Running command: 'nova-compute'
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + umask 0022
Nov 25 04:48:13 np0005534696 nova_compute[228704]: + exec nova-compute
Nov 25 04:48:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:13 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:14 np0005534696 python3.9[228869]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.444026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094444047, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4676, "num_deletes": 502, "total_data_size": 12822587, "memory_usage": 12992992, "flush_reason": "Manual Compaction"}
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094463092, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8307853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13441, "largest_seqno": 18112, "table_properties": {"data_size": 8290133, "index_size": 11974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36516, "raw_average_key_size": 19, "raw_value_size": 8253673, "raw_average_value_size": 4449, "num_data_blocks": 524, "num_entries": 1855, "num_filter_entries": 1855, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063652, "oldest_key_time": 1764063652, "file_creation_time": 1764064094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 19099 microseconds, and 10166 cpu microseconds.
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.463123) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8307853 bytes OK
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.463138) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.463477) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.463489) EVENT_LOG_v1 {"time_micros": 1764064094463486, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.463502) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12802193, prev total WAL file size 12802193, number of live WAL files 2.
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.465358) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8113KB)], [27(11MB)]
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094465417, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 20016578, "oldest_snapshot_seqno": -1}
Nov 25 04:48:14 np0005534696 systemd[1]: Started libpod-conmon-90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c.scope.
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5046 keys, 15136139 bytes, temperature: kUnknown
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094501565, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15136139, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15097690, "index_size": 24707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 125971, "raw_average_key_size": 24, "raw_value_size": 15001586, "raw_average_value_size": 2972, "num_data_blocks": 1042, "num_entries": 5046, "num_filter_entries": 5046, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.501738) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15136139 bytes
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.502073) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 552.7 rd, 417.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.2 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(4.2) write-amplify(1.8) OK, records in: 6068, records dropped: 1022 output_compression: NoCompression
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.502089) EVENT_LOG_v1 {"time_micros": 1764064094502080, "job": 14, "event": "compaction_finished", "compaction_time_micros": 36216, "compaction_time_cpu_micros": 22234, "output_level": 6, "num_output_files": 1, "total_output_size": 15136139, "num_input_records": 6068, "num_output_records": 5046, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094503006, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064094504224, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.465283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:48:14 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:48:14.504260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:48:14 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:48:14 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6088c0a12ee9f9666d01b2e2fb1e31b804e18bc6b6930c9c80f4361c0c55ce6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:14 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6088c0a12ee9f9666d01b2e2fb1e31b804e18bc6b6930c9c80f4361c0c55ce6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:14 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6088c0a12ee9f9666d01b2e2fb1e31b804e18bc6b6930c9c80f4361c0c55ce6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:14 np0005534696 podman[228890]: 2025-11-25 09:48:14.523552775 +0000 UTC m=+0.128770320 container init 90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 25 04:48:14 np0005534696 podman[228890]: 2025-11-25 09:48:14.529086395 +0000 UTC m=+0.134303930 container start 90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 04:48:14 np0005534696 python3.9[228869]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 04:48:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:14.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 04:48:14 np0005534696 nova_compute_init[228908]: INFO:nova_statedir:Nova statedir ownership complete
Nov 25 04:48:14 np0005534696 systemd[1]: libpod-90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c.scope: Deactivated successfully.
Nov 25 04:48:14 np0005534696 podman[228919]: 2025-11-25 09:48:14.635744222 +0000 UTC m=+0.027682512 container died 90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=nova_compute_init)
Nov 25 04:48:14 np0005534696 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c-userdata-shm.mount: Deactivated successfully.
Nov 25 04:48:14 np0005534696 systemd[1]: var-lib-containers-storage-overlay-e6088c0a12ee9f9666d01b2e2fb1e31b804e18bc6b6930c9c80f4361c0c55ce6-merged.mount: Deactivated successfully.
Nov 25 04:48:14 np0005534696 podman[228919]: 2025-11-25 09:48:14.682100191 +0000 UTC m=+0.074038471 container cleanup 90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:b6e1e8a249d36ef36c6ac4170af1e043dda1ccc0f9672832d3ff151bf3533076', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, container_name=nova_compute_init)
Nov 25 04:48:14 np0005534696 systemd[1]: libpod-conmon-90bb4dc37a00a75684b7147243940103c2d11e384fa7ca8d98bc6c4098ec188c.scope: Deactivated successfully.
Nov 25 04:48:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:14 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.113 228708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.114 228708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.114 228708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.114 228708 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 25 04:48:15 np0005534696 systemd[1]: session-52.scope: Deactivated successfully.
Nov 25 04:48:15 np0005534696 systemd[1]: session-52.scope: Consumed 1min 41.334s CPU time.
Nov 25 04:48:15 np0005534696 systemd-logind[744]: Session 52 logged out. Waiting for processes to exit.
Nov 25 04:48:15 np0005534696 systemd-logind[744]: Removed session 52.
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.335 228708 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.347 228708 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.347 228708 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 25 04:48:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:15.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.725 228708 INFO nova.virt.driver [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.807 228708 INFO nova.compute.provider_config [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.821 228708 DEBUG oslo_concurrency.lockutils [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.821 228708 DEBUG oslo_concurrency.lockutils [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.821 228708 DEBUG oslo_concurrency.lockutils [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.822 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.822 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.822 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.822 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.822 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.822 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.823 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.823 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.823 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.823 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.823 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.823 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.823 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.824 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.824 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.824 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.824 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.824 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.824 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.824 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.824 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.825 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.825 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.825 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.825 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.825 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.825 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.826 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.826 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.826 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.826 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.826 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.826 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.826 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.826 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.827 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.827 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.827 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.827 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.827 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.827 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.827 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.828 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.828 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.828 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.828 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.828 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.828 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.829 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.829 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.829 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.829 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.829 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.829 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.829 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.830 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.830 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.830 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.830 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.830 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.830 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.830 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.830 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.831 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.831 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.831 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.831 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.831 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.831 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.831 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.831 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.832 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.832 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.832 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.832 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.832 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.832 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.832 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.833 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.833 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.833 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.833 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.833 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.833 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.833 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.834 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.834 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.834 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.834 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.834 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.834 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.834 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.835 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.835 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.835 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.835 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.835 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.835 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.835 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.835 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.836 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.836 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.836 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.836 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.836 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.836 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.836 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.836 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.837 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.837 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.837 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.837 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.837 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.837 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.837 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.838 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.838 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.838 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.838 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.838 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.838 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.839 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.839 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.839 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.839 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.839 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.839 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.839 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.840 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.840 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.840 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.840 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.840 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.840 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.840 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.840 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.841 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.841 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.841 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.841 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.841 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.841 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.841 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.842 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.842 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.842 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.842 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.842 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.842 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.842 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.843 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.843 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.843 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.843 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.843 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.843 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.843 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.844 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.844 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.844 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.844 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.844 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.844 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.844 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.845 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.845 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.845 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.845 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.845 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.845 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.845 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.845 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.846 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.846 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.846 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.846 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.846 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.846 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.846 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.847 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.847 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.847 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.847 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.847 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.847 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.847 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.848 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.848 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.848 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.848 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.848 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.848 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.848 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.849 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.849 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.849 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.849 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.849 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.849 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.849 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.850 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.850 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.850 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.850 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.850 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.850 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.850 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.850 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.851 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.851 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.851 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.851 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.851 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.851 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.851 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.852 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.852 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.852 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.852 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.852 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.852 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.852 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.853 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.853 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.853 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.853 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.853 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.853 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.853 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.853 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.854 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.854 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.854 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.854 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.854 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.854 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.854 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.855 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.855 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.855 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.855 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.855 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.855 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.855 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.856 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.856 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.856 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.856 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.856 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.856 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.856 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.856 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.857 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.857 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.857 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.857 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.857 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.857 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.857 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.858 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.858 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.858 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.858 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.858 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.858 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.858 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.858 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.859 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.859 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.859 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.859 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.859 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.859 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.859 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.860 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.860 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.860 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.860 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.860 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.860 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.860 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.861 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.861 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.861 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.861 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.861 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.861 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.861 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.861 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.862 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.862 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.862 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.862 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.862 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.862 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.862 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.863 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.863 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.863 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.863 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.863 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.863 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.863 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.863 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.864 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.864 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.864 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.864 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.864 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.864 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.864 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.865 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.865 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.865 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.865 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.865 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.865 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.865 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.865 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.866 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.866 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.866 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.866 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.866 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.866 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.866 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.867 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.867 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.867 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.867 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.867 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.867 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.867 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.867 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.868 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.868 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.868 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.868 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.868 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.868 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.868 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.869 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.869 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.869 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.869 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.869 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.869 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.869 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.870 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.870 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.870 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.870 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.870 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.870 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.870 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.871 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.871 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.871 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.871 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.871 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:15 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd440021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.871 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.872 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.872 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.872 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.872 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.872 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.872 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.872 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.872 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.873 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.873 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.873 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.873 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.873 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.873 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.873 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.873 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.874 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.874 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.874 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.874 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.874 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.874 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.874 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.875 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.875 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.875 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.875 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.875 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.875 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.875 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.876 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.876 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.876 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.876 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.876 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.876 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.876 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.876 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.877 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.877 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.877 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.877 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.877 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.877 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.877 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.878 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.878 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.878 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.878 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.878 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.878 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.878 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.879 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.879 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.879 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.879 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.879 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.879 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.879 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.879 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.880 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.880 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.880 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.880 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.880 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.880 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.880 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.881 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.881 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.881 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.881 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.881 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.881 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.881 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.881 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.882 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.882 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.882 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.882 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.882 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.882 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.882 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.883 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.883 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.883 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.883 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.883 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.883 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.883 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.883 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.884 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.884 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.884 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.884 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.884 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.884 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.884 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.885 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.885 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.885 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.885 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.885 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.885 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.885 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.886 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.886 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.886 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.886 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.886 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.886 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.886 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.886 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.887 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.887 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.887 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.887 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.887 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.887 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.887 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.888 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.888 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.888 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.888 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.888 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.888 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.888 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.889 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.889 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.889 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.889 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.889 228708 WARNING oslo_config.cfg [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 04:48:15 np0005534696 nova_compute[228704]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 04:48:15 np0005534696 nova_compute[228704]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 04:48:15 np0005534696 nova_compute[228704]: and ``live_migration_inbound_addr`` respectively.
Nov 25 04:48:15 np0005534696 nova_compute[228704]: ).  Its value may be silently ignored in the future.#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.889 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.890 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.890 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.890 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.890 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.890 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.890 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.890 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.891 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.891 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.891 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.891 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.891 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.891 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.891 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.892 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.892 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.892 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.892 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rbd_secret_uuid        = af1c9ae3-08d7-5547-a53d-2cccf7c6ef90 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.892 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.892 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.892 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.892 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.893 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.893 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.893 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.893 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.893 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.893 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.893 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.894 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.894 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.894 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.894 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.894 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.894 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.894 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.895 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.895 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.895 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.895 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.895 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.895 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.895 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.896 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.896 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.896 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.896 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.896 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.896 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.896 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.897 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.897 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.897 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.897 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.897 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.897 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.897 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.897 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.898 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.898 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.898 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.898 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.898 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.898 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.898 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.899 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.899 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.899 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.899 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.899 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.899 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.899 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.899 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.900 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.900 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.900 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.900 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.900 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.900 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.900 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.901 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.901 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.901 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.901 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.901 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.901 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.901 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.902 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.902 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.902 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.902 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.902 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.902 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.902 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.903 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.903 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.903 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.903 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.903 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.903 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.903 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.903 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.904 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.904 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.904 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.904 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.904 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.904 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.904 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.905 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.905 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.905 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.905 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.905 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.905 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.906 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.906 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.906 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.906 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.906 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.906 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.906 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.906 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.907 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.907 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.907 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.907 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.907 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.907 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.907 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.908 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.908 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.908 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.908 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.908 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.908 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.908 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.908 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.909 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.909 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.909 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.909 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.909 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.909 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.910 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.910 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.910 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.910 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.910 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.910 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.910 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.911 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.911 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.911 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.911 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.911 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.911 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.911 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.912 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.912 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.912 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.912 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.912 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.912 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.912 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.913 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.913 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.913 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.913 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.913 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.913 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.913 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.913 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.914 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.914 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.914 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.914 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.914 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.914 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.915 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.915 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.915 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.915 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.915 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.915 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.915 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.916 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.916 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.916 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.916 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.916 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.916 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.916 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.916 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.917 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.917 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.917 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.917 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.917 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.917 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.918 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.918 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.918 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.918 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.918 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.918 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.918 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.918 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.919 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.919 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.919 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.919 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.919 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.919 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.919 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.920 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.920 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.920 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.920 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.920 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.920 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.920 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.920 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.921 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.921 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.921 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.921 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.921 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.921 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.921 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.922 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.922 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.922 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.922 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.922 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.922 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.922 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.922 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.923 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.923 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.923 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.923 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.923 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.923 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.923 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.924 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.924 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.924 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.924 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.924 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.924 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.925 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.925 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.925 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.925 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.925 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.925 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.925 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.926 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.926 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.926 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.926 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.926 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.926 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.926 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.926 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.927 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.927 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.927 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.927 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.927 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.927 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.927 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.928 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.928 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.928 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.928 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.928 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.928 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.928 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.929 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.929 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.929 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.929 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.929 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.929 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.929 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.929 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.930 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.930 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.930 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.930 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.930 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.930 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.931 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.931 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.931 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.931 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.931 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.931 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.931 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.931 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.932 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.932 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.932 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.932 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.932 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.932 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.932 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.933 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.933 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.933 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.933 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.933 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.934 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.934 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.934 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.934 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.934 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.934 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.934 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.935 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.935 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.935 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.935 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.935 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.935 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.935 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.936 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.936 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.936 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.936 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.936 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.936 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.936 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.936 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.937 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.937 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.937 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.937 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.938 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.938 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.938 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.938 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.938 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.938 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.938 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.939 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.939 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.939 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.939 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.939 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.939 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.939 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.939 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.940 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.940 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.940 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.940 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.940 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.940 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.940 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.941 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.941 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.941 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.941 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.941 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.941 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.941 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.941 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.942 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.942 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.942 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.942 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.942 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.942 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.942 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.942 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.943 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.943 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.943 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.943 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.943 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.943 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.943 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.944 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.944 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.944 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.944 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.944 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.944 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.944 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.945 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.945 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.945 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.945 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.945 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.945 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.945 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.946 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.946 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.946 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.946 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.946 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.946 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.946 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.947 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.947 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.947 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.947 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.947 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.947 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.947 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.948 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.948 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.948 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.948 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.948 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.948 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.948 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.948 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.949 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.949 228708 DEBUG oslo_service.service [None req-bbe08059-8c1c-45e9-9142-389a5ef39b27 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.950 228708 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.965 228708 INFO nova.virt.node [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Determined node identity e8eea1e0-1833-4152-af65-8b442fac3e0d from /var/lib/nova/compute_id#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.966 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.967 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.967 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.967 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.977 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f14afde7790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.979 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f14afde7790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.979 228708 INFO nova.virt.libvirt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Connection event '1' reason 'None'#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.983 228708 INFO nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <host>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <uuid>24892f32-0515-4f8a-815c-b9b89f76dd8c</uuid>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <cpu>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <arch>x86_64</arch>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <model>EPYC-Milan-v1</model>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <vendor>AMD</vendor>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <microcode version='167776725'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <signature family='25' model='1' stepping='1'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <maxphysaddr mode='emulate' bits='48'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='x2apic'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='tsc-deadline'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='osxsave'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='hypervisor'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='tsc_adjust'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='ospke'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='vaes'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='vpclmulqdq'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='spec-ctrl'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='stibp'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='arch-capabilities'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='ssbd'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='cmp_legacy'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='virt-ssbd'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='lbrv'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='tsc-scale'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='vmcb-clean'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='pause-filter'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='pfthreshold'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='v-vmsave-vmload'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='vgif'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='rdctl-no'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='skip-l1dfl-vmentry'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='mds-no'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <feature name='pschange-mc-no'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <pages unit='KiB' size='4'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <pages unit='KiB' size='2048'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <pages unit='KiB' size='1048576'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </cpu>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <power_management>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <suspend_mem/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </power_management>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <iommu support='no'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <migration_features>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <live/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <uri_transports>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        <uri_transport>tcp</uri_transport>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        <uri_transport>rdma</uri_transport>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      </uri_transports>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </migration_features>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <topology>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <cells num='1'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        <cell id='0'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:          <memory unit='KiB'>7865372</memory>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:          <pages unit='KiB' size='4'>1966343</pages>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:          <pages unit='KiB' size='2048'>0</pages>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:          <distances>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:            <sibling id='0' value='10'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:          </distances>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:          <cpus num='4'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:          </cpus>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        </cell>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      </cells>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </topology>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <cache>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </cache>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <secmodel>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <model>selinux</model>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <doi>0</doi>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </secmodel>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <secmodel>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <model>dac</model>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <doi>0</doi>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </secmodel>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  </host>
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <guest>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <os_type>hvm</os_type>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <arch name='i686'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <wordsize>32</wordsize>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <domain type='qemu'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <domain type='kvm'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </arch>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <features>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <pae/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <nonpae/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <acpi default='on' toggle='yes'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <apic default='on' toggle='no'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <cpuselection/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <deviceboot/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <disksnapshot default='on' toggle='no'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <externalSnapshot/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </features>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  </guest>
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <guest>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <os_type>hvm</os_type>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <arch name='x86_64'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <wordsize>64</wordsize>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <domain type='qemu'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <domain type='kvm'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </arch>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <features>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <acpi default='on' toggle='yes'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <apic default='on' toggle='no'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <cpuselection/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <deviceboot/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <disksnapshot default='on' toggle='no'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <externalSnapshot/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </features>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  </guest>
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 
Nov 25 04:48:15 np0005534696 nova_compute[228704]: </capabilities>
Nov 25 04:48:15 np0005534696 nova_compute[228704]: #033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.987 228708 DEBUG nova.virt.libvirt.volume.mount [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.991 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 25 04:48:15 np0005534696 nova_compute[228704]: 2025-11-25 09:48:15.993 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 04:48:15 np0005534696 nova_compute[228704]: <domainCapabilities>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <domain>kvm</domain>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <arch>i686</arch>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <vcpu max='4096'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <iothreads supported='yes'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <os supported='yes'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <enum name='firmware'/>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <loader supported='yes'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        <value>rom</value>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        <value>pflash</value>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <enum name='readonly'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        <value>yes</value>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        <value>no</value>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <enum name='secure'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:        <value>no</value>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    </loader>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  </os>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:  <cpu>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:    <mode name='host-passthrough' supported='yes'>
Nov 25 04:48:15 np0005534696 nova_compute[228704]:      <enum name='hostPassthroughMigratable'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>on</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>off</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='maximum' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='maximumMigratable'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>on</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>off</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='host-model' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <vendor>AMD</vendor>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='x2apic'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='hypervisor'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vaes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='stibp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='ssbd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='overflow-recov'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='succor'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='lbrv'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc-scale'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='flushbyasid'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='pause-filter'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='pfthreshold'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vgif'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='custom' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Denverton'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Denverton-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Genoa'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='auto-ibrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='auto-ibrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Milan-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-128'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-256'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-512'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v6'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v7'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='KnightsMill'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512er'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512pf'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='KnightsMill-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512er'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512pf'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G4-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tbm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G5-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tbm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SierraForest'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cmpccxadd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SierraForest-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cmpccxadd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='athlon'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='athlon-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='core2duo'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='core2duo-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='coreduo'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='coreduo-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='n270'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='n270-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='phenom'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='phenom-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <memoryBacking supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <enum name='sourceType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>file</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>anonymous</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>memfd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </memoryBacking>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <devices>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <disk supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='diskDevice'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>disk</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>cdrom</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>floppy</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>lun</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='bus'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>fdc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>scsi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>sata</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-non-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <graphics supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vnc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>egl-headless</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dbus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </graphics>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <video supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='modelType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vga</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>cirrus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>none</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>bochs</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ramfb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </video>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <hostdev supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='mode'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>subsystem</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='startupPolicy'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>default</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>mandatory</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>requisite</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>optional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='subsysType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pci</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>scsi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='capsType'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='pciBackend'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </hostdev>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <rng supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-non-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>random</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>egd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>builtin</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </rng>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <filesystem supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='driverType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>path</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>handle</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtiofs</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </filesystem>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <tpm supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tpm-tis</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tpm-crb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>emulator</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>external</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendVersion'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>2.0</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </tpm>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <redirdev supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='bus'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </redirdev>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <channel supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pty</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>unix</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </channel>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <crypto supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>qemu</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>builtin</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </crypto>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <interface supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>default</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>passt</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <panic supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>isa</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>hyperv</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </panic>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <console supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>null</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pty</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dev</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>file</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pipe</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>stdio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>udp</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tcp</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>unix</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>qemu-vdagent</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dbus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </console>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </devices>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <features>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <gic supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <vmcoreinfo supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <genid supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <backingStoreInput supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <backup supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <async-teardown supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <ps2 supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <sev supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <sgx supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <hyperv supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='features'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>relaxed</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vapic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>spinlocks</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vpindex</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>runtime</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>synic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>stimer</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>reset</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vendor_id</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>frequencies</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>reenlightenment</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tlbflush</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ipi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>avic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>emsr_bitmap</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>xmm_input</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <defaults>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <spinlocks>4095</spinlocks>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <stimer_direct>on</stimer_direct>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </defaults>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </hyperv>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <launchSecurity supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='sectype'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tdx</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </launchSecurity>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </features>
Nov 25 04:48:16 np0005534696 nova_compute[228704]: </domainCapabilities>
Nov 25 04:48:16 np0005534696 nova_compute[228704]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.002 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 04:48:16 np0005534696 nova_compute[228704]: <domainCapabilities>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <domain>kvm</domain>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <arch>i686</arch>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <vcpu max='240'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <iothreads supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <os supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <enum name='firmware'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <loader supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>rom</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pflash</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='readonly'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>yes</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>no</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='secure'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>no</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </loader>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </os>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <cpu>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='host-passthrough' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='hostPassthroughMigratable'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>on</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>off</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='maximum' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='maximumMigratable'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>on</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>off</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='host-model' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <vendor>AMD</vendor>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='x2apic'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='hypervisor'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vaes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='stibp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='ssbd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='overflow-recov'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='succor'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='lbrv'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc-scale'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='flushbyasid'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='pause-filter'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='pfthreshold'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vgif'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='custom' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Denverton'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Denverton-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Genoa'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='auto-ibrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='auto-ibrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Milan-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-128'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-256'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-512'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v6'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v7'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='KnightsMill'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512er'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512pf'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='KnightsMill-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512er'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512pf'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G4-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tbm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G5-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tbm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SierraForest'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cmpccxadd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SierraForest-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cmpccxadd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='athlon'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='athlon-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='core2duo'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='core2duo-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='coreduo'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='coreduo-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='n270'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='n270-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='phenom'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='phenom-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <memoryBacking supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <enum name='sourceType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>file</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>anonymous</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>memfd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </memoryBacking>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <devices>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <disk supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='diskDevice'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>disk</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>cdrom</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>floppy</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>lun</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='bus'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ide</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>fdc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>scsi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>sata</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-non-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <graphics supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vnc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>egl-headless</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dbus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </graphics>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <video supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='modelType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vga</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>cirrus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>none</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>bochs</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ramfb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </video>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <hostdev supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='mode'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>subsystem</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='startupPolicy'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>default</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>mandatory</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>requisite</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>optional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='subsysType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pci</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>scsi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='capsType'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='pciBackend'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </hostdev>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <rng supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-non-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>random</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>egd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>builtin</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </rng>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <filesystem supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='driverType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>path</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>handle</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtiofs</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </filesystem>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <tpm supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tpm-tis</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tpm-crb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>emulator</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>external</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendVersion'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>2.0</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </tpm>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <redirdev supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='bus'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </redirdev>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <channel supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pty</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>unix</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </channel>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <crypto supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>qemu</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>builtin</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </crypto>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <interface supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>default</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>passt</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <panic supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>isa</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>hyperv</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </panic>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <console supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>null</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pty</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dev</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>file</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pipe</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>stdio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>udp</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tcp</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>unix</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>qemu-vdagent</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dbus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </console>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </devices>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <features>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <gic supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <vmcoreinfo supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <genid supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <backingStoreInput supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <backup supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <async-teardown supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <ps2 supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <sev supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <sgx supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <hyperv supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='features'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>relaxed</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vapic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>spinlocks</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vpindex</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>runtime</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>synic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>stimer</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>reset</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vendor_id</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>frequencies</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>reenlightenment</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tlbflush</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ipi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>avic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>emsr_bitmap</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>xmm_input</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <defaults>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <spinlocks>4095</spinlocks>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <stimer_direct>on</stimer_direct>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </defaults>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </hyperv>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <launchSecurity supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='sectype'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tdx</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </launchSecurity>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </features>
Nov 25 04:48:16 np0005534696 nova_compute[228704]: </domainCapabilities>
Nov 25 04:48:16 np0005534696 nova_compute[228704]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.008 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.011 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 04:48:16 np0005534696 nova_compute[228704]: <domainCapabilities>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <domain>kvm</domain>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <arch>x86_64</arch>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <vcpu max='4096'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <iothreads supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <os supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <enum name='firmware'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>efi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <loader supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>rom</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pflash</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='readonly'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>yes</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>no</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='secure'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>yes</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>no</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </loader>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </os>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <cpu>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='host-passthrough' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='hostPassthroughMigratable'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>on</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>off</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='maximum' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='maximumMigratable'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>on</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>off</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='host-model' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <vendor>AMD</vendor>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='x2apic'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='hypervisor'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vaes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='stibp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='ssbd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='overflow-recov'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='succor'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='lbrv'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc-scale'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='flushbyasid'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='pause-filter'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='pfthreshold'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vgif'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='custom' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Denverton'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Denverton-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Genoa'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='auto-ibrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='auto-ibrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Milan-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-128'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-256'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-512'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v6'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v7'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='KnightsMill'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512er'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512pf'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='KnightsMill-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512er'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512pf'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G4-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tbm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G5-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tbm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SierraForest'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cmpccxadd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SierraForest-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cmpccxadd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='athlon'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='athlon-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='core2duo'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='core2duo-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='coreduo'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='coreduo-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='n270'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='n270-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='phenom'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='phenom-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <memoryBacking supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <enum name='sourceType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>file</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>anonymous</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>memfd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </memoryBacking>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <devices>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <disk supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='diskDevice'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>disk</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>cdrom</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>floppy</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>lun</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='bus'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>fdc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>scsi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>sata</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-non-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <graphics supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vnc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>egl-headless</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dbus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </graphics>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <video supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='modelType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vga</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>cirrus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>none</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>bochs</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ramfb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </video>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <hostdev supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='mode'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>subsystem</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='startupPolicy'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>default</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>mandatory</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>requisite</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>optional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='subsysType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pci</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>scsi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='capsType'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='pciBackend'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </hostdev>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <rng supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-non-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>random</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>egd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>builtin</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </rng>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <filesystem supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='driverType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>path</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>handle</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtiofs</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </filesystem>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <tpm supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tpm-tis</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tpm-crb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>emulator</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>external</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendVersion'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>2.0</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </tpm>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <redirdev supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='bus'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </redirdev>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <channel supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pty</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>unix</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </channel>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <crypto supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>qemu</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>builtin</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </crypto>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <interface supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>default</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>passt</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <panic supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>isa</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>hyperv</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </panic>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <console supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>null</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pty</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dev</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>file</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pipe</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>stdio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>udp</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tcp</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>unix</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>qemu-vdagent</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dbus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </console>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </devices>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <features>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <gic supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <vmcoreinfo supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <genid supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <backingStoreInput supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <backup supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <async-teardown supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <ps2 supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <sev supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <sgx supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <hyperv supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='features'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>relaxed</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vapic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>spinlocks</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vpindex</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>runtime</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>synic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>stimer</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>reset</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vendor_id</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>frequencies</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>reenlightenment</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tlbflush</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ipi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>avic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>emsr_bitmap</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>xmm_input</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <defaults>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <spinlocks>4095</spinlocks>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <stimer_direct>on</stimer_direct>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </defaults>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </hyperv>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <launchSecurity supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='sectype'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tdx</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </launchSecurity>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </features>
Nov 25 04:48:16 np0005534696 nova_compute[228704]: </domainCapabilities>
Nov 25 04:48:16 np0005534696 nova_compute[228704]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.076 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 04:48:16 np0005534696 nova_compute[228704]: <domainCapabilities>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <path>/usr/libexec/qemu-kvm</path>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <domain>kvm</domain>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <arch>x86_64</arch>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <vcpu max='240'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <iothreads supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <os supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <enum name='firmware'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <loader supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>rom</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pflash</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='readonly'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>yes</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>no</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='secure'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>no</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </loader>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </os>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <cpu>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='host-passthrough' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='hostPassthroughMigratable'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>on</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>off</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='maximum' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='maximumMigratable'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>on</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>off</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='host-model' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <vendor>AMD</vendor>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <maxphysaddr mode='passthrough' limit='48'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='x2apic'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc-deadline'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='hypervisor'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc_adjust'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vaes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='spec-ctrl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='stibp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='ssbd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='cmp_legacy'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='overflow-recov'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='succor'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='virt-ssbd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='lbrv'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='tsc-scale'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vmcb-clean'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='flushbyasid'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='pause-filter'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='pfthreshold'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='v-vmsave-vmload'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='vgif'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <mode name='custom' supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Broadwell-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cascadelake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Cooperlake-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Denverton'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Denverton-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Genoa'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='auto-ibrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Genoa-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='auto-ibrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='EPYC-Milan-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amd-psfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='no-nested-data-bp'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='null-sel-clr-base'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='stibp-always-on'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='GraniteRapids-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-128'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-256'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx10-512'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='prefetchiti'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Haswell-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-noTSX'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v6'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Icelake-Server-v7'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='KnightsMill'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512er'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512pf'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='KnightsMill-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4fmaps'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-4vnniw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512er'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512pf'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G4-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tbm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Opteron_G5-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fma4'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tbm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xop'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SapphireRapids-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='amx-tile'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-bf16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-fp16'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512-vpopcntdq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bitalg'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vbmi2'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrc'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fzrm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='la57'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='taa-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='tsx-ldtrk'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='xfd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SierraForest'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cmpccxadd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='SierraForest-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ifma'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-ne-convert'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx-vnni-int8'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='bus-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cmpccxadd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fbsdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='fsrs'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ibrs-all'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mcdt-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='pbrsb-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='psdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='sbdr-ssdp-no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='serialize'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Client-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='hle'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='rtm'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Skylake-Server-v5'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512bw'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512cd'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512dq'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512f'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='avx512vl'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='mpx'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v2'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v3'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='core-capability'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='split-lock-detect'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='Snowridge-v4'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='cldemote'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='gfni'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdir64b'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='movdiri'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='athlon'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='athlon-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='core2duo'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='core2duo-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='coreduo'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='coreduo-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='n270'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='n270-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='ss'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='phenom'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <blockers model='phenom-v1'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnow'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <feature name='3dnowext'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </blockers>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </mode>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <memoryBacking supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <enum name='sourceType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>file</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>anonymous</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <value>memfd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </memoryBacking>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <devices>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <disk supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='diskDevice'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>disk</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>cdrom</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>floppy</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>lun</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='bus'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ide</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>fdc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>scsi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>sata</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-non-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <graphics supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vnc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>egl-headless</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dbus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </graphics>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <video supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='modelType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vga</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>cirrus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>none</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>bochs</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ramfb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </video>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <hostdev supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='mode'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>subsystem</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='startupPolicy'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>default</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>mandatory</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>requisite</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>optional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='subsysType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pci</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>scsi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='capsType'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='pciBackend'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </hostdev>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <rng supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtio-non-transitional</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>random</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>egd</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>builtin</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </rng>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <filesystem supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='driverType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>path</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>handle</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>virtiofs</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </filesystem>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <tpm supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tpm-tis</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tpm-crb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>emulator</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>external</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendVersion'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>2.0</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </tpm>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <redirdev supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='bus'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>usb</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </redirdev>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <channel supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pty</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>unix</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </channel>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <crypto supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>qemu</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendModel'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>builtin</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </crypto>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <interface supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='backendType'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>default</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>passt</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <panic supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='model'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>isa</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>hyperv</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </panic>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <console supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='type'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>null</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vc</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pty</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dev</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>file</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>pipe</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>stdio</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>udp</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tcp</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>unix</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>qemu-vdagent</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>dbus</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </console>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </devices>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  <features>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <gic supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <vmcoreinfo supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <genid supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <backingStoreInput supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <backup supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <async-teardown supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <ps2 supported='yes'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <sev supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <sgx supported='no'/>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <hyperv supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='features'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>relaxed</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vapic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>spinlocks</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vpindex</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>runtime</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>synic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>stimer</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>reset</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>vendor_id</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>frequencies</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>reenlightenment</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tlbflush</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>ipi</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>avic</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>emsr_bitmap</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>xmm_input</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <defaults>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <spinlocks>4095</spinlocks>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <stimer_direct>on</stimer_direct>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <tlbflush_direct>on</tlbflush_direct>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <tlbflush_extended>on</tlbflush_extended>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </defaults>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </hyperv>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    <launchSecurity supported='yes'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      <enum name='sectype'>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:        <value>tdx</value>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:      </enum>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:    </launchSecurity>
Nov 25 04:48:16 np0005534696 nova_compute[228704]:  </features>
Nov 25 04:48:16 np0005534696 nova_compute[228704]: </domainCapabilities>
Nov 25 04:48:16 np0005534696 nova_compute[228704]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.127 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.128 228708 INFO nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Secure Boot support detected#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.129 228708 INFO nova.virt.libvirt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.129 228708 INFO nova.virt.libvirt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.136 228708 DEBUG nova.virt.libvirt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.186 228708 INFO nova.virt.node [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Determined node identity e8eea1e0-1833-4152-af65-8b442fac3e0d from /var/lib/nova/compute_id#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.196 228708 WARNING nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Compute nodes ['e8eea1e0-1833-4152-af65-8b442fac3e0d'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.220 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.232 228708 WARNING nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.233 228708 DEBUG oslo_concurrency.lockutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.233 228708 DEBUG oslo_concurrency.lockutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.233 228708 DEBUG oslo_concurrency.lockutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.233 228708 DEBUG nova.compute.resource_tracker [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.234 228708 DEBUG oslo_concurrency.processutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:48:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:16.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:48:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3156523680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:48:16 np0005534696 nova_compute[228704]: 2025-11-25 09:48:16.595 228708 DEBUG oslo_concurrency.processutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:48:16 np0005534696 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 04:48:16 np0005534696 systemd[1]: Started libvirt nodedev daemon.
Nov 25 04:48:16 np0005534696 podman[229012]: 2025-11-25 09:48:16.717231121 +0000 UTC m=+0.078469493 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:48:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094816 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:48:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:16 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.001 228708 WARNING nova.virt.libvirt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.002 228708 DEBUG nova.compute.resource_tracker [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5217MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.003 228708 DEBUG oslo_concurrency.lockutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.003 228708 DEBUG oslo_concurrency.lockutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.012 228708 WARNING nova.compute.resource_tracker [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] No compute node record for compute-2.ctlplane.example.com:e8eea1e0-1833-4152-af65-8b442fac3e0d: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host e8eea1e0-1833-4152-af65-8b442fac3e0d could not be found.#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.023 228708 INFO nova.compute.resource_tracker [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: e8eea1e0-1833-4152-af65-8b442fac3e0d#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.063 228708 DEBUG nova.compute.resource_tracker [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.064 228708 DEBUG nova.compute.resource_tracker [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.179 228708 INFO nova.scheduler.client.report [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [req-fb4f6d7e-c348-4079-8f9b-130e8786f350] Created resource provider record via placement API for resource provider with UUID e8eea1e0-1833-4152-af65-8b442fac3e0d and name compute-2.ctlplane.example.com.#033[00m
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.226 228708 DEBUG oslo_concurrency.processutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:48:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:17.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:48:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1737525609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:48:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.593 228708 DEBUG oslo_concurrency.processutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.598 228708 DEBUG nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 04:48:17 np0005534696 nova_compute[228704]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.598 228708 INFO nova.virt.libvirt.host [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] kernel doesn't support AMD SEV
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.598 228708 DEBUG nova.compute.provider_tree [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.599 228708 DEBUG nova.virt.libvirt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.650 228708 DEBUG nova.scheduler.client.report [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Updated inventory for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.650 228708 DEBUG nova.compute.provider_tree [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Updating resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.651 228708 DEBUG nova.compute.provider_tree [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.726 228708 DEBUG nova.compute.provider_tree [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Updating resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.745 228708 DEBUG nova.compute.resource_tracker [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.745 228708 DEBUG oslo_concurrency.lockutils [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.745 228708 DEBUG nova.service [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.776 228708 DEBUG nova.service [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 25 04:48:17 np0005534696 nova_compute[228704]: 2025-11-25 09:48:17.777 228708 DEBUG nova.servicegroup.drivers.db [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 25 04:48:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:17 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:18.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:18 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4007980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:19 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002210 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:20.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:20 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:21.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:21 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4007980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:22 np0005534696 podman[229112]: 2025-11-25 09:48:22.32918043 +0000 UTC m=+0.042297340 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 04:48:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:22.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:22 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:23 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:24.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:24 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4007980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:25 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:26 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd4c0099c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:27.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:27 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:48:27 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:48:27 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:48:27 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:48:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4007980 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:27 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:28.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:28 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:29 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb0003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:30 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:48:30 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:48:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:30.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:30 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4007a50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:31.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda80041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:31 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:32.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:32 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb0004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4007a70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:33 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda80041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:34.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:34 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:48:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:35.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:48:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb0004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:35 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4007a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:36.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:36 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8004ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:48:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:37.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:48:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:37 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb0004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:38 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 04:48:38 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3835819779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:48:38 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 04:48:38 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3835819779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:48:38 np0005534696 podman[229251]: 2025-11-25 09:48:38.323426489 +0000 UTC m=+0.036380332 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:48:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:38.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:38 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda4007ab0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:39.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8004ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:39 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdd44002250 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:40.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:40 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb00057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:48:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:41.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:48:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fddb00057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:48:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[147373]: 25/11/2025 09:48:41 : epoch 69257a05 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdda8004ed0 fd 48 proxy ignored for local
Nov 25 04:48:41 np0005534696 kernel: ganesha.nfsd[226295]: segfault at 50 ip 00007fddf6be932e sp 00007fddbb7fd210 error 4 in libntirpc.so.5.8[7fddf6bce000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 25 04:48:41 np0005534696 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 04:48:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:41 np0005534696 systemd[1]: Started Process Core Dump (PID 229296/UID 0).
Nov 25 04:48:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:42.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:42 np0005534696 systemd-coredump[229297]: Process 147396 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 74:#012#0  0x00007fddf6be932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 25 04:48:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:42 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 04:48:42 np0005534696 systemd[1]: systemd-coredump@4-229296-0.service: Deactivated successfully.
Nov 25 04:48:42 np0005534696 podman[229303]: 2025-11-25 09:48:42.982658974 +0000 UTC m=+0.018558159 container died e9b26ee0cdfd1574982440200acae1b90f9cb988aa79eb6267e87f95f3cd119b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 25 04:48:42 np0005534696 systemd[1]: var-lib-containers-storage-overlay-9175878127aec9dace65bda13ce469f90169e555e1d4de5a83318ab63b9f968e-merged.mount: Deactivated successfully.
Nov 25 04:48:43 np0005534696 podman[229303]: 2025-11-25 09:48:43.004509923 +0000 UTC m=+0.040409099 container remove e9b26ee0cdfd1574982440200acae1b90f9cb988aa79eb6267e87f95f3cd119b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:48:43 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Main process exited, code=exited, status=139/n/a
Nov 25 04:48:43 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Failed with result 'exit-code'.
Nov 25 04:48:43 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.357s CPU time.
Nov 25 04:48:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:43.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:48:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:44.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:48:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:48:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:45.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:48:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:46.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:47 np0005534696 podman[229342]: 2025-11-25 09:48:47.345003921 +0000 UTC m=+0.054842481 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 04:48:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:47.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094847 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:48:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:48.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:49.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:48:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:50.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:48:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:51.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:52.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:53 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Scheduled restart job, restart counter is at 5.
Nov 25 04:48:53 np0005534696 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:48:53 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.357s CPU time.
Nov 25 04:48:53 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:48:53 np0005534696 podman[229371]: 2025-11-25 09:48:53.337569155 +0000 UTC m=+0.045888658 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:48:53 np0005534696 podman[229426]: 2025-11-25 09:48:53.429394619 +0000 UTC m=+0.027246091 container create 3c7df93273eb0852870cce0748063fbbcefdf9264f43749d30ffeddd3d7935c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:48:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:53.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:53 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba21640a8bbd3430b55a4c4973a965820b0186c5473bf2b3677fc7d35ae2a09d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:53 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba21640a8bbd3430b55a4c4973a965820b0186c5473bf2b3677fc7d35ae2a09d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:53 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba21640a8bbd3430b55a4c4973a965820b0186c5473bf2b3677fc7d35ae2a09d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:53 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba21640a8bbd3430b55a4c4973a965820b0186c5473bf2b3677fc7d35ae2a09d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:48:53 np0005534696 podman[229426]: 2025-11-25 09:48:53.466736371 +0000 UTC m=+0.064587853 container init 3c7df93273eb0852870cce0748063fbbcefdf9264f43749d30ffeddd3d7935c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:48:53 np0005534696 podman[229426]: 2025-11-25 09:48:53.470530205 +0000 UTC m=+0.068381677 container start 3c7df93273eb0852870cce0748063fbbcefdf9264f43749d30ffeddd3d7935c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 25 04:48:53 np0005534696 bash[229426]: 3c7df93273eb0852870cce0748063fbbcefdf9264f43749d30ffeddd3d7935c6
Nov 25 04:48:53 np0005534696 podman[229426]: 2025-11-25 09:48:53.417756422 +0000 UTC m=+0.015607914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:48:53 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:54.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:55.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:48:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:56.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:57.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:48:58.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:48:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:48:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:48:59.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:48:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:59 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:48:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:48:59 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:48:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:48:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:48:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:48:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:00.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:01.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:02.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:03.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:04.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:04 np0005534696 nova_compute[228704]: 2025-11-25 09:49:04.779 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:04 np0005534696 nova_compute[228704]: 2025-11-25 09:49:04.824 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:49:05.342 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:49:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:49:05.342 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:49:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:49:05.343 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:49:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:05.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d64000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:49:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d540038f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:06.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:07 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:07.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:07 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d540038f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.840442) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147840473, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 782, "num_deletes": 250, "total_data_size": 1567396, "memory_usage": 1590240, "flush_reason": "Manual Compaction"}
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147843316, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 693820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18118, "largest_seqno": 18894, "table_properties": {"data_size": 690614, "index_size": 1050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8478, "raw_average_key_size": 20, "raw_value_size": 683831, "raw_average_value_size": 1628, "num_data_blocks": 46, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064095, "oldest_key_time": 1764064095, "file_creation_time": 1764064147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 2918 microseconds, and 2073 cpu microseconds.
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.843356) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 693820 bytes OK
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.843373) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.844002) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.844016) EVENT_LOG_v1 {"time_micros": 1764064147844012, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.844029) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1563280, prev total WAL file size 1563280, number of live WAL files 2.
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.844450) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(677KB)], [30(14MB)]
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147844472, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 15829959, "oldest_snapshot_seqno": -1}
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4973 keys, 12085412 bytes, temperature: kUnknown
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147871434, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12085412, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12051400, "index_size": 20462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 124870, "raw_average_key_size": 25, "raw_value_size": 11960548, "raw_average_value_size": 2405, "num_data_blocks": 856, "num_entries": 4973, "num_filter_entries": 4973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.871614) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12085412 bytes
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.872090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 585.9 rd, 447.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 14.4 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(40.2) write-amplify(17.4) OK, records in: 5466, records dropped: 493 output_compression: NoCompression
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.872105) EVENT_LOG_v1 {"time_micros": 1764064147872098, "job": 16, "event": "compaction_finished", "compaction_time_micros": 27016, "compaction_time_cpu_micros": 19155, "output_level": 6, "num_output_files": 1, "total_output_size": 12085412, "num_input_records": 5466, "num_output_records": 4973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147872284, "job": 16, "event": "table_file_deletion", "file_number": 32}
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064147874304, "job": 16, "event": "table_file_deletion", "file_number": 30}
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.844400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:49:07 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:49:07.874374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:49:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094907 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:49:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:07 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:08.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:09 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:09 np0005534696 podman[229536]: 2025-11-25 09:49:09.336181313 +0000 UTC m=+0.042710202 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 04:49:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:09.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:09 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:09 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d540038f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:10.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:11 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:11 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:11 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:12.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:13 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54004600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:13.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:13 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:13 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:14.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:15 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.358 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.358 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.359 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.359 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.381 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.381 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.381 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.381 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.381 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.382 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.382 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.382 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.382 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.424 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.424 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.425 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.425 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.425 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:49:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:15.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:15 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:49:15 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3660985114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.772 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:49:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:15 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.989 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.991 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5281MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": 
"0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.991 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:49:15 np0005534696 nova_compute[228704]: 2025-11-25 09:49:15.991 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:49:16 np0005534696 nova_compute[228704]: 2025-11-25 09:49:16.096 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:49:16 np0005534696 nova_compute[228704]: 2025-11-25 09:49:16.096 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:49:16 np0005534696 nova_compute[228704]: 2025-11-25 09:49:16.170 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:49:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:49:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3521299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:49:16 np0005534696 nova_compute[228704]: 2025-11-25 09:49:16.512 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:49:16 np0005534696 nova_compute[228704]: 2025-11-25 09:49:16.516 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 04:49:16 np0005534696 nova_compute[228704]: 2025-11-25 09:49:16.555 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 04:49:16 np0005534696 nova_compute[228704]: 2025-11-25 09:49:16.556 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 04:49:16 np0005534696 nova_compute[228704]: 2025-11-25 09:49:16.556 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:49:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:16.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:17 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60002740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:17 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:17 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:18 np0005534696 podman[229606]: 2025-11-25 09:49:18.375802187 +0000 UTC m=+0.086273860 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:49:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:18.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:19 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58003fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:19.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:19 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:19 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 25 04:49:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2201242462' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 04:49:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:20.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:21 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:21.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:21 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:21 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:22.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:23 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:23.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:23 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54004f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:23 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54004f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:24 np0005534696 podman[229660]: 2025-11-25 09:49:24.356623118 +0000 UTC m=+0.068041025 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 04:49:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:24.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:25 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:25.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:25 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:25 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:26.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:27 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:49:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:27.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:49:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:27 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:27 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:28.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:29 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:29.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:29 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:29 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:30.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:31 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:31.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:31 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:31 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:32.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:49:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:49:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:33 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:33.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:33 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:49:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:49:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:49:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:49:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:33 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58006060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:34.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:35 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:35.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:35 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:35 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:36 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 25 04:49:36 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3958313225' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 04:49:36 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Nov 25 04:49:36 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2382711148' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 25 04:49:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:36.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:36 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:49:36 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:49:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:37 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d60003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:49:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:37.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:49:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:37 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:37 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:38.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:39 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:39.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:39 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:39 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:40 np0005534696 podman[229801]: 2025-11-25 09:49:40.334353084 +0000 UTC m=+0.041783396 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:49:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:40.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:41 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d700027a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:41.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:41 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c0022e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:41 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:42.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:43 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:43.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:43 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d700032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:43 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c0022e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:44.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:45 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:45.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:45 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:45 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d700032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:46.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:47 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d700032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:47.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:47 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:47 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:48.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:49 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:49 np0005534696 podman[229851]: 2025-11-25 09:49:49.374773 +0000 UTC m=+0.077459908 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 04:49:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:49.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:49 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d700032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:49 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:50.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:51 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c0032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:51.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:51 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:51 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d70004b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:52.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:53.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 04:49:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1000117064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:49:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 04:49:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1000117064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:49:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:53 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:54.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:55 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d70004b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:55 np0005534696 podman[229880]: 2025-11-25 09:49:55.346266384 +0000 UTC m=+0.053770432 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 25 04:49:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:55.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:55 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:49:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:55 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d58007160 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:49:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:56.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:49:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094956 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:49:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:57 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:57.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:57 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d78003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:57 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d78003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:49:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:49:58.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:49:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:59 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d70005850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/094959 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:49:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:49:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:49:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:49:59.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:49:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:59 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d78003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:49:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:49:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:49:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:49:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:49:59 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:00 np0005534696 ceph-mon[75508]: overall HEALTH_OK
Nov 25 04:50:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:01 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:01.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:01 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:01 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d78004ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:02.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:03 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:03.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:03 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d70005850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:03 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54005c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:04.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d78004ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:50:05.343 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:50:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:50:05.343 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:50:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:50:05.343 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:50:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:05.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c003bf0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:05 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d70005850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:06 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:50:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:06.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:07 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d54008030 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:07.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:07 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d78004ca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:07 np0005534696 kernel: ganesha.nfsd[229794]: segfault at 50 ip 00007f6e149bf32e sp 00007f6de27fb210 error 4 in libntirpc.so.5.8[7f6e149a4000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 04:50:07 np0005534696 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 04:50:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[229439]: 25/11/2025 09:50:07 : epoch 69257b85 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6d6c003bf0 fd 38 proxy ignored for local
Nov 25 04:50:07 np0005534696 systemd[1]: Started Process Core Dump (PID 229935/UID 0).
Nov 25 04:50:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:08.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:09 np0005534696 systemd-coredump[229936]: Process 229443 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 54:#012#0  0x00007f6e149bf32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 25 04:50:09 np0005534696 systemd[1]: systemd-coredump@5-229935-0.service: Deactivated successfully.
Nov 25 04:50:09 np0005534696 systemd[1]: systemd-coredump@5-229935-0.service: Consumed 1.179s CPU time.
Nov 25 04:50:09 np0005534696 podman[229942]: 2025-11-25 09:50:09.21683246 +0000 UTC m=+0.022099791 container died 3c7df93273eb0852870cce0748063fbbcefdf9264f43749d30ffeddd3d7935c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:50:09 np0005534696 systemd[1]: var-lib-containers-storage-overlay-ba21640a8bbd3430b55a4c4973a965820b0186c5473bf2b3677fc7d35ae2a09d-merged.mount: Deactivated successfully.
Nov 25 04:50:09 np0005534696 podman[229942]: 2025-11-25 09:50:09.236435845 +0000 UTC m=+0.041703165 container remove 3c7df93273eb0852870cce0748063fbbcefdf9264f43749d30ffeddd3d7935c6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 25 04:50:09 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Main process exited, code=exited, status=139/n/a
Nov 25 04:50:09 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Failed with result 'exit-code'.
Nov 25 04:50:09 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.245s CPU time.
Nov 25 04:50:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:09.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:10.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:11 np0005534696 podman[229976]: 2025-11-25 09:50:11.331186815 +0000 UTC m=+0.043319331 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 04:50:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:11.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:12.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:13.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095013 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:50:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [NOTICE] 328/095013 (4) : haproxy version is 2.3.17-d1c9119
Nov 25 04:50:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [NOTICE] 328/095013 (4) : path to executable is /usr/local/sbin/haproxy
Nov 25 04:50:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [ALERT] 328/095013 (4) : backend 'backend' has no server available!
Nov 25 04:50:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:14.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095014 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:50:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:15.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.549 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.565 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.566 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.566 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.566 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.582 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.582 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.582 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.583 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.583 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:50:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:16.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:50:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3765179905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:50:16 np0005534696 nova_compute[228704]: 2025-11-25 09:50:16.944 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.174 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.175 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5288MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.175 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.175 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.233 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.233 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.252 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:50:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:17.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:50:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2335408978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.600 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.604 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.617 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.618 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:50:17 np0005534696 nova_compute[228704]: 2025-11-25 09:50:17.619 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:50:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.409 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.410 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.410 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.410 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.424 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.424 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.425 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.425 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:50:18 np0005534696 nova_compute[228704]: 2025-11-25 09:50:18.425 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:50:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:18.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095019 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:50:19 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Scheduled restart job, restart counter is at 6.
Nov 25 04:50:19 np0005534696 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:50:19 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.245s CPU time.
Nov 25 04:50:19 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:50:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:19.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:19 np0005534696 podman[230045]: 2025-11-25 09:50:19.615142414 +0000 UTC m=+0.070878684 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:50:19 np0005534696 podman[230105]: 2025-11-25 09:50:19.709286083 +0000 UTC m=+0.034861367 container create fd8240e459fddc6848797510e3a8300697f058ed8408bf23bb5d8cd061af810d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 25 04:50:19 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc448d3f556d5818b83cb75e95235960c19bfc17f43764debb5a1ce42632adba/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:50:19 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc448d3f556d5818b83cb75e95235960c19bfc17f43764debb5a1ce42632adba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:50:19 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc448d3f556d5818b83cb75e95235960c19bfc17f43764debb5a1ce42632adba/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:50:19 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc448d3f556d5818b83cb75e95235960c19bfc17f43764debb5a1ce42632adba/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:50:19 np0005534696 podman[230105]: 2025-11-25 09:50:19.75845486 +0000 UTC m=+0.084030165 container init fd8240e459fddc6848797510e3a8300697f058ed8408bf23bb5d8cd061af810d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:50:19 np0005534696 podman[230105]: 2025-11-25 09:50:19.764425506 +0000 UTC m=+0.090000790 container start fd8240e459fddc6848797510e3a8300697f058ed8408bf23bb5d8cd061af810d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:50:19 np0005534696 bash[230105]: fd8240e459fddc6848797510e3a8300697f058ed8408bf23bb5d8cd061af810d
Nov 25 04:50:19 np0005534696 podman[230105]: 2025-11-25 09:50:19.695261816 +0000 UTC m=+0.020837111 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:50:19 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:20.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:20 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:50:20.959 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:50:20 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:50:20.960 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:50:20 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:50:20.961 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:50:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:21.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:22.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:23.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:24.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:25.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:25 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:50:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:25 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:50:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:26 np0005534696 podman[230192]: 2025-11-25 09:50:26.338225247 +0000 UTC m=+0.046529142 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 25 04:50:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:26.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:27.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:28.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:29.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:30.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:31.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9094000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:32.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:33 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088001e10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:33.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:33 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095033 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:50:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:33 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:35 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9094001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:35.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:35 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:35 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:36.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:37 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c002b50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:37 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:50:37 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:50:37 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:50:37 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:50:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:37.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:37 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9094001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:37 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c003470 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:38.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:39 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:39.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:39 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c003470 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:39 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9094001d70 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:40 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:50:40 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:50:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:40.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:41 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c003470 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:41.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:41 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:41 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004620 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:42 np0005534696 podman[230370]: 2025-11-25 09:50:42.329086636 +0000 UTC m=+0.038681810 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:50:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:42.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:43 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90940091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:43.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:43 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004620 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:43 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:44.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:45 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:45.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:45 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90940091b0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:45 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0056c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:46.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:47 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:47.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:47 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0056c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:47 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0056c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:48.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:49 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0056c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:49.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:49 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:49 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:50 np0005534696 podman[230395]: 2025-11-25 09:50:50.345883229 +0000 UTC m=+0.054747967 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 04:50:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:50.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:51 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:50:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:51.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:50:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:51 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:51 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:52.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:53 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:53.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:53 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 04:50:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1004775845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:50:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 04:50:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1004775845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:50:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:53 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:54.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:55 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:55.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:55 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:50:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:55 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:56.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:57 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:57 np0005534696 podman[230425]: 2025-11-25 09:50:57.332039887 +0000 UTC m=+0.041912400 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:50:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095057 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:50:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:57.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:57 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.867756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257867783, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1357, "num_deletes": 255, "total_data_size": 3279593, "memory_usage": 3328336, "flush_reason": "Manual Compaction"}
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257873129, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2128333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18899, "largest_seqno": 20251, "table_properties": {"data_size": 2122606, "index_size": 3054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12091, "raw_average_key_size": 18, "raw_value_size": 2110976, "raw_average_value_size": 3308, "num_data_blocks": 137, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064148, "oldest_key_time": 1764064148, "file_creation_time": 1764064257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5397 microseconds, and 4075 cpu microseconds.
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873154) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2128333 bytes OK
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873165) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873520) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873531) EVENT_LOG_v1 {"time_micros": 1764064257873528, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.873539) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3273163, prev total WAL file size 3273163, number of live WAL files 2.
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874136) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2078KB)], [33(11MB)]
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257874169, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14213745, "oldest_snapshot_seqno": -1}
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5087 keys, 13751445 bytes, temperature: kUnknown
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257907276, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13751445, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13715785, "index_size": 21854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 128504, "raw_average_key_size": 25, "raw_value_size": 13621997, "raw_average_value_size": 2677, "num_data_blocks": 904, "num_entries": 5087, "num_filter_entries": 5087, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.907425) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13751445 bytes
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.908004) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 428.7 rd, 414.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.5 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.5) OK, records in: 5611, records dropped: 524 output_compression: NoCompression
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.908017) EVENT_LOG_v1 {"time_micros": 1764064257908011, "job": 18, "event": "compaction_finished", "compaction_time_micros": 33158, "compaction_time_cpu_micros": 19936, "output_level": 6, "num_output_files": 1, "total_output_size": 13751445, "num_input_records": 5611, "num_output_records": 5087, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257908486, "job": 18, "event": "table_file_deletion", "file_number": 35}
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064257910140, "job": 18, "event": "table_file_deletion", "file_number": 33}
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.874081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:50:57 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:50:57.910184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:50:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:57 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:50:58.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:59 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400a2d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:50:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:50:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:50:59.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:50:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:59 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:50:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:50:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:50:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:50:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:50:59 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:00.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:01 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:01.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:01 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400add0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:01 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:51:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:51:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:03 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:51:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:03.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:51:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:03 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:03 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4003820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:04.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:05 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400add0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:51:05.344 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:51:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:51:05.344 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:51:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:51:05.344 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:51:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:51:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:05.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:51:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:05 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:05 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:51:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:05 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:06.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:07 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4004360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:07.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:07 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400add0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:07 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400add0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:08.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:08 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:51:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:08 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:51:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:08 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:51:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:09 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:09.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:09 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:09 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:10.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:11 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4004c80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:11.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:11 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400c2c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:11 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:51:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:11 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:12.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:13 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:13 np0005534696 podman[230484]: 2025-11-25 09:51:13.32412254 +0000 UTC m=+0.035655094 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 04:51:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:13.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:13 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4004c80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:13 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400c2c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:14.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:15 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.378 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.378 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:51:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:15.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:51:15 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3595711511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.705 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:51:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:15 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.882 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.884 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5268MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": 
"0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.884 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.885 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:51:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.955 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.955 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:51:15 np0005534696 nova_compute[228704]: 2025-11-25 09:51:15.970 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:51:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:15 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4004c80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:51:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4009478358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:51:16 np0005534696 nova_compute[228704]: 2025-11-25 09:51:16.300 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:51:16 np0005534696 nova_compute[228704]: 2025-11-25 09:51:16.303 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:51:16 np0005534696 nova_compute[228704]: 2025-11-25 09:51:16.329 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:51:16 np0005534696 nova_compute[228704]: 2025-11-25 09:51:16.330 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:51:16 np0005534696 nova_compute[228704]: 2025-11-25 09:51:16.330 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:51:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:16.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:17 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400c2c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:17 np0005534696 nova_compute[228704]: 2025-11-25 09:51:17.330 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:17 np0005534696 nova_compute[228704]: 2025-11-25 09:51:17.330 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:17 np0005534696 nova_compute[228704]: 2025-11-25 09:51:17.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:17 np0005534696 nova_compute[228704]: 2025-11-25 09:51:17.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:51:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095117 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:51:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:17.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:17 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:17 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:18 np0005534696 nova_compute[228704]: 2025-11-25 09:51:18.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:18 np0005534696 nova_compute[228704]: 2025-11-25 09:51:18.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:51:18 np0005534696 nova_compute[228704]: 2025-11-25 09:51:18.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:51:18 np0005534696 nova_compute[228704]: 2025-11-25 09:51:18.440 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:51:18 np0005534696 nova_compute[228704]: 2025-11-25 09:51:18.440 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:18.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4005fb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:19 np0005534696 nova_compute[228704]: 2025-11-25 09:51:19.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:19 np0005534696 nova_compute[228704]: 2025-11-25 09:51:19.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:19 np0005534696 nova_compute[228704]: 2025-11-25 09:51:19.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:51:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:51:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:19.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:51:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400d360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:19 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:20.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:21 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:21 np0005534696 podman[230566]: 2025-11-25 09:51:21.342724806 +0000 UTC m=+0.057550018 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 04:51:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:21.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:21 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:21 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400d360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:22.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:23 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:23.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:23 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4005fb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:23 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:24.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:25 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f909400d360 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:25.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:25 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:25 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4006cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:26.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:27 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:27.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:27 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c006bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:27 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac002600 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:28 np0005534696 podman[230608]: 2025-11-25 09:51:28.329071452 +0000 UTC m=+0.038384770 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 04:51:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:28.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:29 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90a4006cc0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:29.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:29 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9094001340 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:29 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0078c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:30.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0078c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:31.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0078c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:31 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0078c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:32.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:33 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0078c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:33.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:33 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0078c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:33 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c0078c0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:34.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:35 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:35.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:35 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:35 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b8002600 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:36.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:37 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:37 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b00bf560 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:37 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:38.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:39 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b8005500 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:39.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:39 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:39 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b00c0080 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:40.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:41 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:41.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:41 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:51:41 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:51:41 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:51:41 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:51:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:41 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b8005500 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:41 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:51:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:42.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:51:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:43 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:43 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:43 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b00c0080 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:44 np0005534696 podman[230747]: 2025-11-25 09:51:44.327157182 +0000 UTC m=+0.039334380 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 04:51:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:51:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:51:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:44.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:45 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b8006210 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:45.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:45 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:45 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=404 latency=0.002000020s ======
Nov 25 04:51:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:46.111 +0000] "GET /healthcheck HTTP/1.1" 404 242 - "python-urllib3/1.26.5" - latency=0.002000020s
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.747891) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306747906, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 737, "num_deletes": 251, "total_data_size": 1406210, "memory_usage": 1419968, "flush_reason": "Manual Compaction"}
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306751141, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 928493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20256, "largest_seqno": 20988, "table_properties": {"data_size": 924953, "index_size": 1384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8273, "raw_average_key_size": 19, "raw_value_size": 917791, "raw_average_value_size": 2159, "num_data_blocks": 61, "num_entries": 425, "num_filter_entries": 425, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064258, "oldest_key_time": 1764064258, "file_creation_time": 1764064306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3302 microseconds, and 2340 cpu microseconds.
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.751192) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 928493 bytes OK
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.751202) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.751650) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.751661) EVENT_LOG_v1 {"time_micros": 1764064306751658, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.751667) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1402276, prev total WAL file size 1402276, number of live WAL files 2.
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.752153) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(906KB)], [36(13MB)]
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306752168, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 14679938, "oldest_snapshot_seqno": -1}
Nov 25 04:51:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:46.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4996 keys, 12485690 bytes, temperature: kUnknown
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306779043, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12485690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12451650, "index_size": 20426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 127265, "raw_average_key_size": 25, "raw_value_size": 12360381, "raw_average_value_size": 2474, "num_data_blocks": 840, "num_entries": 4996, "num_filter_entries": 4996, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.779181) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12485690 bytes
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.780493) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 545.4 rd, 463.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.1 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(29.3) write-amplify(13.4) OK, records in: 5512, records dropped: 516 output_compression: NoCompression
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.780507) EVENT_LOG_v1 {"time_micros": 1764064306780500, "job": 20, "event": "compaction_finished", "compaction_time_micros": 26915, "compaction_time_cpu_micros": 18667, "output_level": 6, "num_output_files": 1, "total_output_size": 12485690, "num_input_records": 5512, "num_output_records": 4996, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306780696, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064306782253, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.751946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.782291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.782295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.782296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.782298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:51:46 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:51:46.782299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:51:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:47 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b00c0080 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:47.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:47 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b8006210 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:47 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:48.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:49 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:49.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:49 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Nov 25 04:51:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:49 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:50 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac003a60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Nov 25 04:51:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:51 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b8006f20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:51.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:51 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Nov 25 04:51:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:51 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:52 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:52 np0005534696 podman[230796]: 2025-11-25 09:51:52.343878405 +0000 UTC m=+0.057492650 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:51:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:52.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:52 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Nov 25 04:51:52 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Nov 25 04:51:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:53 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:53.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:53 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b8006f20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 04:51:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337901668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:51:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 04:51:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337901668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:51:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:54 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:54.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:55 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac005570 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095155 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:51:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:51:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:51:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:55 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:51:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:56 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:56.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:57 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:51:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:57.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:51:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:57 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac005570 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:58 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b8006f20 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:51:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:51:58.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:51:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:59 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:59 np0005534696 podman[230826]: 2025-11-25 09:51:59.328104498 +0000 UTC m=+0.039775300 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 04:51:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:51:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:51:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:51:59.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:51:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:51:59 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:51:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:51:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:51:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:51:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:00 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac005570 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:00.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:01 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b80087a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:01.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:01 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:02 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:02.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:03 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f908c004f40 fd 47 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:03.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:03 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:52:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:03 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b80087a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:04 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:04.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095204 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:52:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:05 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:52:05.345 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:52:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:52:05.346 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:52:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:52:05.346 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:52:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:05.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:05 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac005570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:06 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90940022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:06 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:52:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:06 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:52:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:06.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:07 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b80087a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:07.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:07 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:08 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac005570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:52:08.497 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:52:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:52:08.497 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:52:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:08.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:09 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90940022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:09 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:52:09.499 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:52:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:09.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:09 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b80087a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:10 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:10.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:11 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac005570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:11 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:52:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:11 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:52:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:11.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:11 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90940022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:12 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90b80087c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:12.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:13 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9088002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:13.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:13 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90ac005570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:14 np0005534696 kernel: ganesha.nfsd[230874]: segfault at 50 ip 00007f914062a32e sp 00007f90fd7f9210 error 4 in libntirpc.so.5.8[7f914060f000+2c000] likely on CPU 1 (core 0, socket 1)
Nov 25 04:52:14 np0005534696 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 04:52:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[230117]: 25/11/2025 09:52:14 : epoch 69257bdb : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f90940022a0 fd 48 proxy ignored for local
Nov 25 04:52:14 np0005534696 systemd[1]: Started Process Core Dump (PID 230884/UID 0).
Nov 25 04:52:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:14.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:14 np0005534696 systemd-coredump[230885]: Process 230122 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 60:#012#0  0x00007f914062a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 25 04:52:15 np0005534696 systemd[1]: systemd-coredump@6-230884-0.service: Deactivated successfully.
Nov 25 04:52:15 np0005534696 podman[230891]: 2025-11-25 09:52:15.070715947 +0000 UTC m=+0.025508641 container died fd8240e459fddc6848797510e3a8300697f058ed8408bf23bb5d8cd061af810d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 25 04:52:15 np0005534696 systemd[1]: var-lib-containers-storage-overlay-cc448d3f556d5818b83cb75e95235960c19bfc17f43764debb5a1ce42632adba-merged.mount: Deactivated successfully.
Nov 25 04:52:15 np0005534696 podman[230891]: 2025-11-25 09:52:15.089747333 +0000 UTC m=+0.044540027 container remove fd8240e459fddc6848797510e3a8300697f058ed8408bf23bb5d8cd061af810d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:52:15 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Main process exited, code=exited, status=139/n/a
Nov 25 04:52:15 np0005534696 podman[230890]: 2025-11-25 09:52:15.101153943 +0000 UTC m=+0.046460425 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 04:52:15 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Failed with result 'exit-code'.
Nov 25 04:52:15 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.056s CPU time.
Nov 25 04:52:15 np0005534696 nova_compute[228704]: 2025-11-25 09:52:15.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:15.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:16.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.374 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.374 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.374 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.374 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.374 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:52:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:52:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1419400791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:52:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:17.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.715 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.909 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.910 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5299MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": 
"0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.911 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.911 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:52:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.973 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.974 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:52:17 np0005534696 nova_compute[228704]: 2025-11-25 09:52:17.988 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:52:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:52:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3747932840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:52:18 np0005534696 nova_compute[228704]: 2025-11-25 09:52:18.326 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:52:18 np0005534696 nova_compute[228704]: 2025-11-25 09:52:18.330 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:52:18 np0005534696 nova_compute[228704]: 2025-11-25 09:52:18.343 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:52:18 np0005534696 nova_compute[228704]: 2025-11-25 09:52:18.345 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:52:18 np0005534696 nova_compute[228704]: 2025-11-25 09:52:18.345 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:52:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:18.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.345 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.346 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.346 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.351 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.370 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.370 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.371 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:19 np0005534696 nova_compute[228704]: 2025-11-25 09:52:19.371 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:52:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095219 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:52:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:19.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095220 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:52:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:20.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:52:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:21.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:52:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:22.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:23 np0005534696 podman[231017]: 2025-11-25 09:52:23.350349075 +0000 UTC m=+0.059295788 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 04:52:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:23.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:24.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:25 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Scheduled restart job, restart counter is at 7.
Nov 25 04:52:25 np0005534696 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:52:25 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.056s CPU time.
Nov 25 04:52:25 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:52:25 np0005534696 podman[231081]: 2025-11-25 09:52:25.446314019 +0000 UTC m=+0.027376490 container create 4b16e2c7ab5313d4b0a2c0091ef1011873f465d6fd9c16f3db1253b399dcde09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 25 04:52:25 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cb4f315f28fde409f85929b64d1b00f0e2c63a6173f193f4f33fd3fddef6cb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:52:25 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cb4f315f28fde409f85929b64d1b00f0e2c63a6173f193f4f33fd3fddef6cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:52:25 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cb4f315f28fde409f85929b64d1b00f0e2c63a6173f193f4f33fd3fddef6cb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:52:25 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cb4f315f28fde409f85929b64d1b00f0e2c63a6173f193f4f33fd3fddef6cb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:52:25 np0005534696 podman[231081]: 2025-11-25 09:52:25.490863393 +0000 UTC m=+0.071925874 container init 4b16e2c7ab5313d4b0a2c0091ef1011873f465d6fd9c16f3db1253b399dcde09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:52:25 np0005534696 podman[231081]: 2025-11-25 09:52:25.494564315 +0000 UTC m=+0.075626786 container start 4b16e2c7ab5313d4b0a2c0091ef1011873f465d6fd9c16f3db1253b399dcde09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 25 04:52:25 np0005534696 bash[231081]: 4b16e2c7ab5313d4b0a2c0091ef1011873f465d6fd9c16f3db1253b399dcde09
Nov 25 04:52:25 np0005534696 podman[231081]: 2025-11-25 09:52:25.43528505 +0000 UTC m=+0.016347541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:52:25 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:52:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:25.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:26.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:27.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:28.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:29.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:30 np0005534696 podman[231141]: 2025-11-25 09:52:30.350519598 +0000 UTC m=+0.063193151 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Nov 25 04:52:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e142 e142: 3 total, 3 up, 3 in
Nov 25 04:52:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:30.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:52:31 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e143 e143: 3 total, 3 up, 3 in
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:52:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:31.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:32.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:33.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:34.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095235 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:52:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:36.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000001b:nfs.cephfs.1: -2
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 25 04:52:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:37.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:37 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 e144: 3 total, 3 up, 3 in
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c8001f90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:38.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5de020 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:39.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095240 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:52:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8d4001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c8002ab0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:41.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5deb20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8d4002760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:43.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8d4002760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5deb20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:44.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:45 np0005534696 podman[231293]: 2025-11-25 09:52:45.329265497 +0000 UTC m=+0.040267097 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:52:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:45.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:45 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 04:52:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8d4002760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8d4002760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:46.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5deb20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:47.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8d4002760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:48.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5dbb20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc0095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:52:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:52:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:52:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4058 writes, 21K keys, 4058 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s#012Cumulative WAL: 4058 writes, 4058 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1506 writes, 7188 keys, 1506 commit groups, 1.0 writes per commit group, ingest: 16.94 MB, 0.03 MB/s#012Interval WAL: 1506 writes, 1506 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    425.2      0.08              0.05        10    0.008       0      0       0.0       0.0#012  L6      1/0   11.91 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    506.9    427.9      0.26              0.17         9    0.029     42K   4818       0.0       0.0#012 Sum      1/0   11.91 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    391.8    427.3      0.34              0.22        19    0.018     42K   4818       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.4    400.9    405.7      0.15              0.10         8    0.019     22K   2555       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    506.9    427.9      0.26              0.17         9    0.029     42K   4818       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    429.8      0.08              0.05         9    0.008       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.3 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56183fd2f350#2 capacity: 304.00 MB usage: 9.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(485,8.85 MB,2.911%) FilterBlock(19,125.17 KB,0.0402099%) IndexBlock(19,239.66 KB,0.0769866%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 04:52:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:50.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8d4002760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:51.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5dbca0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:52.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc0095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:53.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8d40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:54 np0005534696 podman[231343]: 2025-11-25 09:52:54.363243375 +0000 UTC m=+0.070488846 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:52:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:54.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:55.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:52:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:52:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:56.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:52:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00a540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:57.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5dc5c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:52:58.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00a540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:52:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:52:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:52:59.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:52:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:52:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00a540 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:52:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:52:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:52:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:52:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5dc5c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:00.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:01 np0005534696 podman[231373]: 2025-11-25 09:53:01.331147886 +0000 UTC m=+0.037341566 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 04:53:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:01.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00afe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00afe0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:02.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5dc5c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:03.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00b900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 25 04:53:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:04.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00b900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:53:05.346 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:53:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:53:05.347 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:53:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:53:05.347 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:53:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:05.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582eb5dc5c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00b900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:07.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00b900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00b900 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:09.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8e8002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:11.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:12.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:13.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:14.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:15 np0005534696 nova_compute[228704]: 2025-11-25 09:53:15.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:15 np0005534696 nova_compute[228704]: 2025-11-25 09:53:15.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:53:15 np0005534696 nova_compute[228704]: 2025-11-25 09:53:15.367 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:53:15 np0005534696 nova_compute[228704]: 2025-11-25 09:53:15.367 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:15 np0005534696 nova_compute[228704]: 2025-11-25 09:53:15.367 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 04:53:15 np0005534696 nova_compute[228704]: 2025-11-25 09:53:15.377 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 04:53:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7310 writes, 28K keys, 7310 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7310 writes, 1661 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1399 writes, 3452 keys, 1399 commit groups, 1.0 writes per commit group, ingest: 2.70 MB, 0.00 MB/s#012Interval WAL: 1399 writes, 648 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Nov 25 04:53:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:15.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:16 np0005534696 podman[231436]: 2025-11-25 09:53:16.334314263 +0000 UTC m=+0.045805722 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:53:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:16.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:16 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:53:16.965 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:53:16 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:53:16.966 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:53:16 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:53:16.967 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:53:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.382 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.398 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.398 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.398 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.398 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.398 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.757 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:53:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:17.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.988 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.989 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5224MB free_disk=59.92179870605469GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.989 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:53:17 np0005534696 nova_compute[228704]: 2025-11-25 09:53:17.989 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:53:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.080 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.080 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.139 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing inventories for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.203 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating ProviderTree inventory for provider e8eea1e0-1833-4152-af65-8b442fac3e0d from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.203 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.227 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing aggregate associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.247 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing trait associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, traits: HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.264 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:53:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:53:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/259020438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.623 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.628 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.641 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.643 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:53:18 np0005534696 nova_compute[228704]: 2025-11-25 09:53:18.643 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:53:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:18.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:19 np0005534696 nova_compute[228704]: 2025-11-25 09:53:19.618 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:19 np0005534696 nova_compute[228704]: 2025-11-25 09:53:19.619 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:19 np0005534696 nova_compute[228704]: 2025-11-25 09:53:19.619 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:19 np0005534696 nova_compute[228704]: 2025-11-25 09:53:19.619 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:53:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:19.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:20 np0005534696 nova_compute[228704]: 2025-11-25 09:53:20.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:20 np0005534696 nova_compute[228704]: 2025-11-25 09:53:20.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:20.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:21 np0005534696 nova_compute[228704]: 2025-11-25 09:53:21.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:21 np0005534696 nova_compute[228704]: 2025-11-25 09:53:21.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:53:21 np0005534696 nova_compute[228704]: 2025-11-25 09:53:21.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:53:21 np0005534696 nova_compute[228704]: 2025-11-25 09:53:21.374 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:53:21 np0005534696 nova_compute[228704]: 2025-11-25 09:53:21.375 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:21 np0005534696 nova_compute[228704]: 2025-11-25 09:53:21.375 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:53:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:21.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00c610 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:22.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582edb6b5e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:23.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec004360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:24.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582edb6b5e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:25 np0005534696 podman[231530]: 2025-11-25 09:53:25.382728891 +0000 UTC m=+0.092588212 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:53:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:25.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:26.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:27.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582edb6b5e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:28.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582edb6b5e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:30.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:31.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0057d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:32 np0005534696 podman[231560]: 2025-11-25 09:53:32.357288884 +0000 UTC m=+0.069385075 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:53:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:32.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x5582edb6b5e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:34.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:35.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:36.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:38.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:39.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:40.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:41.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8c80033d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:42.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:43.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:44.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:45.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8f80bf840 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:46.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:47 np0005534696 podman[231618]: 2025-11-25 09:53:47.328173253 +0000 UTC m=+0.036889082 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:53:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:48.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8f80c0340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:49.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:53:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:53:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:53:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:53:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:53:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:53:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:50.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:51.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:52.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:53.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:53:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:53:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:54.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8f80c0c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095355 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:53:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:55.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:53:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:56 np0005534696 podman[231815]: 2025-11-25 09:53:56.345615657 +0000 UTC m=+0.054782461 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 04:53:56 np0005534696 nova_compute[228704]: 2025-11-25 09:53:56.783 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:53:56 np0005534696 nova_compute[228704]: 2025-11-25 09:53:56.784 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:53:56 np0005534696 nova_compute[228704]: 2025-11-25 09:53:56.815 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:53:56 np0005534696 nova_compute[228704]: 2025-11-25 09:53:56.868 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:53:56 np0005534696 nova_compute[228704]: 2025-11-25 09:53:56.869 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:53:56 np0005534696 nova_compute[228704]: 2025-11-25 09:53:56.873 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:53:56 np0005534696 nova_compute[228704]: 2025-11-25 09:53:56.873 228708 INFO nova.compute.claims [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 25 04:53:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:56.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:56 np0005534696 nova_compute[228704]: 2025-11-25 09:53:56.959 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:53:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:53:57 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2293824702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.292 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:53:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.297 228708 DEBUG nova.compute.provider_tree [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.307 228708 DEBUG nova.scheduler.client.report [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.328 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.328 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.364 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.365 228708 DEBUG nova.network.neutron [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.378 228708 INFO nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.389 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.465 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.466 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.466 228708 INFO nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Creating image(s)#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.484 228708 DEBUG nova.storage.rbd_utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ca79ca37-186d-411c-b60c-640a85d7c8a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.500 228708 DEBUG nova.storage.rbd_utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ca79ca37-186d-411c-b60c-640a85d7c8a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.516 228708 DEBUG nova.storage.rbd_utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ca79ca37-186d-411c-b60c-640a85d7c8a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.518 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.519 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:53:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:57.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:57 np0005534696 nova_compute[228704]: 2025-11-25 09:53:57.898 228708 DEBUG nova.virt.libvirt.imagebackend [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image locations are: [{'url': 'rbd://af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/images/62ddd1b7-1bba-493e-a10f-b03a12ab3457/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://af1c9ae3-08d7-5547-a53d-2cccf7c6ef90/images/62ddd1b7-1bba-493e-a10f-b03a12ab3457/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 25 04:53:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8f80c0c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.059 228708 WARNING oslo_policy.policy [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.061 228708 DEBUG nova.policy [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:53:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8f80c0c60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.600 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.644 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.645 228708 DEBUG nova.virt.images [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] 62ddd1b7-1bba-493e-a10f-b03a12ab3457 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.646 228708 DEBUG nova.privsep.utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.647 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.716 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.part /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.719 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.764 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9.converted --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.765 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.782 228708 DEBUG nova.storage.rbd_utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ca79ca37-186d-411c-b60c-640a85d7c8a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.784 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 ca79ca37-186d-411c-b60c-640a85d7c8a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:53:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:53:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:53:58.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:53:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.948 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 ca79ca37-186d-411c-b60c-640a85d7c8a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:53:58 np0005534696 nova_compute[228704]: 2025-11-25 09:53:58.991 228708 DEBUG nova.storage.rbd_utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image ca79ca37-186d-411c-b60c-640a85d7c8a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:53:59 np0005534696 nova_compute[228704]: 2025-11-25 09:53:59.044 228708 DEBUG nova.objects.instance [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid ca79ca37-186d-411c-b60c-640a85d7c8a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:53:59 np0005534696 nova_compute[228704]: 2025-11-25 09:53:59.059 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:53:59 np0005534696 nova_compute[228704]: 2025-11-25 09:53:59.059 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Ensure instance console log exists: /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:53:59 np0005534696 nova_compute[228704]: 2025-11-25 09:53:59.060 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:53:59 np0005534696 nova_compute[228704]: 2025-11-25 09:53:59.060 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:53:59 np0005534696 nova_compute[228704]: 2025-11-25 09:53:59.060 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:53:59 np0005534696 nova_compute[228704]: 2025-11-25 09:53:59.099 228708 DEBUG nova.network.neutron [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Successfully created port: 21840a13-6732-487c-8048-5f629bcfa4ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:53:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:53:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:53:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:53:59.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:53:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:53:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:53:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:53:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:53:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:53:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:00 np0005534696 nova_compute[228704]: 2025-11-25 09:54:00.362 228708 DEBUG nova.network.neutron [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Successfully updated port: 21840a13-6732-487c-8048-5f629bcfa4ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:54:00 np0005534696 nova_compute[228704]: 2025-11-25 09:54:00.375 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:54:00 np0005534696 nova_compute[228704]: 2025-11-25 09:54:00.375 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:54:00 np0005534696 nova_compute[228704]: 2025-11-25 09:54:00.375 228708 DEBUG nova.network.neutron [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:54:00 np0005534696 nova_compute[228704]: 2025-11-25 09:54:00.448 228708 DEBUG nova.compute.manager [req-917412fa-c7b6-4c4c-bb4c-94003ac48457 req-3c263fb9-b0ce-4d29-8e7b-873b246323fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-changed-21840a13-6732-487c-8048-5f629bcfa4ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:00 np0005534696 nova_compute[228704]: 2025-11-25 09:54:00.449 228708 DEBUG nova.compute.manager [req-917412fa-c7b6-4c4c-bb4c-94003ac48457 req-3c263fb9-b0ce-4d29-8e7b-873b246323fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Refreshing instance network info cache due to event network-changed-21840a13-6732-487c-8048-5f629bcfa4ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:54:00 np0005534696 nova_compute[228704]: 2025-11-25 09:54:00.449 228708 DEBUG oslo_concurrency.lockutils [req-917412fa-c7b6-4c4c-bb4c-94003ac48457 req-3c263fb9-b0ce-4d29-8e7b-873b246323fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:54:00 np0005534696 nova_compute[228704]: 2025-11-25 09:54:00.509 228708 DEBUG nova.network.neutron [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:54:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:00.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.403 228708 DEBUG nova.network.neutron [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.417 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.417 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Instance network_info: |[{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.418 228708 DEBUG oslo_concurrency.lockutils [req-917412fa-c7b6-4c4c-bb4c-94003ac48457 req-3c263fb9-b0ce-4d29-8e7b-873b246323fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.418 228708 DEBUG nova.network.neutron [req-917412fa-c7b6-4c4c-bb4c-94003ac48457 req-3c263fb9-b0ce-4d29-8e7b-873b246323fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Refreshing network info cache for port 21840a13-6732-487c-8048-5f629bcfa4ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.420 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Start _get_guest_xml network_info=[{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.423 228708 WARNING nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.427 228708 DEBUG nova.virt.libvirt.host [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.427 228708 DEBUG nova.virt.libvirt.host [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.432 228708 DEBUG nova.virt.libvirt.host [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.432 228708 DEBUG nova.virt.libvirt.host [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.433 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.433 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.433 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.433 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.434 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.434 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.434 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.434 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.434 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.435 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.435 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.435 228708 DEBUG nova.virt.hardware [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.438 228708 DEBUG nova.privsep.utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.438 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:54:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 04:54:01 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3719102998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.773 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.792 228708 DEBUG nova.storage.rbd_utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ca79ca37-186d-411c-b60c-640a85d7c8a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:54:01 np0005534696 nova_compute[228704]: 2025-11-25 09:54:01.794 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:54:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:01.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9040023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 04:54:02 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/890083397' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.127 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.128 228708 DEBUG nova.virt.libvirt.vif [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-390927783',display_name='tempest-TestNetworkBasicOps-server-390927783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-390927783',id=3,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/LOSsuqS79AZmC6VwV2XH9CWUBaXJh1TZaGbQ6eYP2j6spxOrdg2cOeHSfAGVBq21aZfYvQ6caaSpZDGxI5QanjNZPSsJ3dPGHUybUeoJjsrYJPbKSgEEOXBfITkpdaw==',key_name='tempest-TestNetworkBasicOps-244155696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-94zvllik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:53:57Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ca79ca37-186d-411c-b60c-640a85d7c8a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.129 228708 DEBUG nova.network.os_vif_util [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.130 228708 DEBUG nova.network.os_vif_util [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=21840a13-6732-487c-8048-5f629bcfa4ff,network=Network(de5dde40-3ef0-4c85-b48a-62ea2f4c04e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21840a13-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.131 228708 DEBUG nova.objects.instance [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid ca79ca37-186d-411c-b60c-640a85d7c8a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.147 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <uuid>ca79ca37-186d-411c-b60c-640a85d7c8a0</uuid>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <name>instance-00000003</name>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <memory>131072</memory>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <vcpu>1</vcpu>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <metadata>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <nova:name>tempest-TestNetworkBasicOps-server-390927783</nova:name>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <nova:creationTime>2025-11-25 09:54:01</nova:creationTime>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <nova:flavor name="m1.nano">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <nova:memory>128</nova:memory>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <nova:disk>1</nova:disk>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <nova:swap>0</nova:swap>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      </nova:flavor>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <nova:owner>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      </nova:owner>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <nova:ports>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <nova:port uuid="21840a13-6732-487c-8048-5f629bcfa4ff">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        </nova:port>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      </nova:ports>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </nova:instance>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  </metadata>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <sysinfo type="smbios">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <system>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <entry name="serial">ca79ca37-186d-411c-b60c-640a85d7c8a0</entry>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <entry name="uuid">ca79ca37-186d-411c-b60c-640a85d7c8a0</entry>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </system>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  </sysinfo>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <os>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <boot dev="hd"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <smbios mode="sysinfo"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  </os>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <features>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <acpi/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <apic/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <vmcoreinfo/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  </features>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <clock offset="utc">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <timer name="hpet" present="no"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  </clock>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <cpu mode="host-model" match="exact">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  <devices>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <disk type="network" device="disk">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <driver type="raw" cache="none"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <source protocol="rbd" name="vms/ca79ca37-186d-411c-b60c-640a85d7c8a0_disk">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <host name="192.168.122.102" port="6789"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <host name="192.168.122.101" port="6789"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      </source>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <auth username="openstack">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      </auth>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <target dev="vda" bus="virtio"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <disk type="network" device="cdrom">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <driver type="raw" cache="none"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <source protocol="rbd" name="vms/ca79ca37-186d-411c-b60c-640a85d7c8a0_disk.config">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <host name="192.168.122.102" port="6789"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <host name="192.168.122.101" port="6789"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      </source>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <auth username="openstack">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:        <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      </auth>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <target dev="sda" bus="sata"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <interface type="ethernet">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <mac address="fa:16:3e:c5:3f:c2"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <model type="virtio"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <mtu size="1442"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <target dev="tap21840a13-67"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <serial type="pty">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <log file="/var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/console.log" append="off"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </serial>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <video>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <model type="virtio"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </video>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <input type="tablet" bus="usb"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <rng model="virtio">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </rng>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <controller type="usb" index="0"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    <memballoon model="virtio">
Nov 25 04:54:02 np0005534696 nova_compute[228704]:      <stats period="10"/>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:    </memballoon>
Nov 25 04:54:02 np0005534696 nova_compute[228704]:  </devices>
Nov 25 04:54:02 np0005534696 nova_compute[228704]: </domain>
Nov 25 04:54:02 np0005534696 nova_compute[228704]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.148 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Preparing to wait for external event network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.148 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.148 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.149 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.149 228708 DEBUG nova.virt.libvirt.vif [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-390927783',display_name='tempest-TestNetworkBasicOps-server-390927783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-390927783',id=3,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/LOSsuqS79AZmC6VwV2XH9CWUBaXJh1TZaGbQ6eYP2j6spxOrdg2cOeHSfAGVBq21aZfYvQ6caaSpZDGxI5QanjNZPSsJ3dPGHUybUeoJjsrYJPbKSgEEOXBfITkpdaw==',key_name='tempest-TestNetworkBasicOps-244155696',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-94zvllik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:53:57Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ca79ca37-186d-411c-b60c-640a85d7c8a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.149 228708 DEBUG nova.network.os_vif_util [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.150 228708 DEBUG nova.network.os_vif_util [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=21840a13-6732-487c-8048-5f629bcfa4ff,network=Network(de5dde40-3ef0-4c85-b48a-62ea2f4c04e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21840a13-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.150 228708 DEBUG os_vif [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=21840a13-6732-487c-8048-5f629bcfa4ff,network=Network(de5dde40-3ef0-4c85-b48a-62ea2f4c04e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21840a13-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.177 228708 DEBUG ovsdbapp.backend.ovs_idl [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.177 228708 DEBUG ovsdbapp.backend.ovs_idl [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.177 228708 DEBUG ovsdbapp.backend.ovs_idl [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.178 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.178 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.178 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.179 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.180 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.181 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.189 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.190 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.190 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.191 228708 INFO oslo.privsep.daemon [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2s23642t/privsep.sock']
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.730 228708 INFO oslo.privsep.daemon [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Spawned new privsep daemon via rootwrap
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.655 232137 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.657 232137 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.659 232137 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.659 232137 INFO oslo.privsep.daemon [-] privsep daemon running as pid 232137
Nov 25 04:54:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:02.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.975 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.976 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21840a13-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.976 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21840a13-67, col_values=(('external_ids', {'iface-id': '21840a13-6732-487c-8048-5f629bcfa4ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:3f:c2', 'vm-uuid': 'ca79ca37-186d-411c-b60c-640a85d7c8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.978 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:02 np0005534696 NetworkManager[48892]: <info>  [1764064442.9787] manager: (tap21840a13-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.980 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.982 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:02 np0005534696 nova_compute[228704]: 2025-11-25 09:54:02.982 228708 INFO os_vif [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=21840a13-6732-487c-8048-5f629bcfa4ff,network=Network(de5dde40-3ef0-4c85-b48a-62ea2f4c04e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21840a13-67')
Nov 25 04:54:03 np0005534696 nova_compute[228704]: 2025-11-25 09:54:03.140 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 04:54:03 np0005534696 nova_compute[228704]: 2025-11-25 09:54:03.140 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 04:54:03 np0005534696 nova_compute[228704]: 2025-11-25 09:54:03.140 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:c5:3f:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 04:54:03 np0005534696 nova_compute[228704]: 2025-11-25 09:54:03.141 228708 INFO nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Using config drive
Nov 25 04:54:03 np0005534696 nova_compute[228704]: 2025-11-25 09:54:03.157 228708 DEBUG nova.storage.rbd_utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ca79ca37-186d-411c-b60c-640a85d7c8a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:54:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d710 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:03 np0005534696 podman[232162]: 2025-11-25 09:54:03.328515281 +0000 UTC m=+0.040035181 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 04:54:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:03.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec0064e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900005370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.209 228708 DEBUG nova.network.neutron [req-917412fa-c7b6-4c4c-bb4c-94003ac48457 req-3c263fb9-b0ce-4d29-8e7b-873b246323fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updated VIF entry in instance network info cache for port 21840a13-6732-487c-8048-5f629bcfa4ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.209 228708 DEBUG nova.network.neutron [req-917412fa-c7b6-4c4c-bb4c-94003ac48457 req-3c263fb9-b0ce-4d29-8e7b-873b246323fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.221 228708 DEBUG oslo_concurrency.lockutils [req-917412fa-c7b6-4c4c-bb4c-94003ac48457 req-3c263fb9-b0ce-4d29-8e7b-873b246323fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 04:54:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.350 228708 INFO nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Creating config drive at /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/disk.config
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.354 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9htc0usg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.474 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9htc0usg" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.490 228708 DEBUG nova.storage.rbd_utils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image ca79ca37-186d-411c-b60c-640a85d7c8a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.492 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/disk.config ca79ca37-186d-411c-b60c-640a85d7c8a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.561 228708 DEBUG oslo_concurrency.processutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/disk.config ca79ca37-186d-411c-b60c-640a85d7c8a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.562 228708 INFO nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Deleting local config drive /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/disk.config because it was imported into RBD.
Nov 25 04:54:04 np0005534696 systemd[1]: Starting libvirt secret daemon...
Nov 25 04:54:04 np0005534696 systemd[1]: Started libvirt secret daemon.
Nov 25 04:54:04 np0005534696 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 25 04:54:04 np0005534696 kernel: tap21840a13-67: entered promiscuous mode
Nov 25 04:54:04 np0005534696 NetworkManager[48892]: <info>  [1764064444.6265] manager: (tap21840a13-67): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Nov 25 04:54:04 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:04Z|00027|binding|INFO|Claiming lport 21840a13-6732-487c-8048-5f629bcfa4ff for this chassis.
Nov 25 04:54:04 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:04Z|00028|binding|INFO|21840a13-6732-487c-8048-5f629bcfa4ff: Claiming fa:16:3e:c5:3f:c2 10.100.0.7
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.636 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:04 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:04.641 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:3f:c2 10.100.0.7'], port_security=['fa:16:3e:c5:3f:c2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ca79ca37-186d-411c-b60c-640a85d7c8a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f229e11a-d1eb-4c26-8d60-021e4739f1f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2598d6f-b466-4648-90e5-665655c38fd2, chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=21840a13-6732-487c-8048-5f629bcfa4ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 04:54:04 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:04.642 142676 INFO neutron.agent.ovn.metadata.agent [-] Port 21840a13-6732-487c-8048-5f629bcfa4ff in datapath de5dde40-3ef0-4c85-b48a-62ea2f4c04e7 bound to our chassis
Nov 25 04:54:04 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:04.644 142676 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de5dde40-3ef0-4c85-b48a-62ea2f4c04e7
Nov 25 04:54:04 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:04.645 142676 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpquhvbwuh/privsep.sock']
Nov 25 04:54:04 np0005534696 systemd-udevd[232257]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:54:04 np0005534696 systemd-machined[192760]: New machine qemu-1-instance-00000003.
Nov 25 04:54:04 np0005534696 systemd[1]: Started Virtual Machine qemu-1-instance-00000003.
Nov 25 04:54:04 np0005534696 NetworkManager[48892]: <info>  [1764064444.6792] device (tap21840a13-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:54:04 np0005534696 NetworkManager[48892]: <info>  [1764064444.6801] device (tap21840a13-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.709 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:04 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:04Z|00029|binding|INFO|Setting lport 21840a13-6732-487c-8048-5f629bcfa4ff ovn-installed in OVS
Nov 25 04:54:04 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:04Z|00030|binding|INFO|Setting lport 21840a13-6732-487c-8048-5f629bcfa4ff up in Southbound
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.715 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:04.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.907 228708 DEBUG nova.compute.manager [req-94f68ad9-914c-4722-99bc-117db984f33a req-0358c319-0758-4d6f-9651-b547b05f7391 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.908 228708 DEBUG oslo_concurrency.lockutils [req-94f68ad9-914c-4722-99bc-117db984f33a req-0358c319-0758-4d6f-9651-b547b05f7391 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.908 228708 DEBUG oslo_concurrency.lockutils [req-94f68ad9-914c-4722-99bc-117db984f33a req-0358c319-0758-4d6f-9651-b547b05f7391 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.908 228708 DEBUG oslo_concurrency.lockutils [req-94f68ad9-914c-4722-99bc-117db984f33a req-0358c319-0758-4d6f-9651-b547b05f7391 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.908 228708 DEBUG nova.compute.manager [req-94f68ad9-914c-4722-99bc-117db984f33a req-0358c319-0758-4d6f-9651-b547b05f7391 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Processing event network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 04:54:04 np0005534696 nova_compute[228704]: 2025-11-25 09:54:04.909 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:54:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.182 142676 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.183 142676 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpquhvbwuh/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.097 232274 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.100 232274 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.102 232274 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.102 232274 INFO oslo.privsep.daemon [-] privsep daemon running as pid 232274
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.185 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[44ebbf3b-17f6-479c-bd0f-83c58ade3924]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.231 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064445.2300363, ca79ca37-186d-411c-b60c-640a85d7c8a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.232 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] VM Started (Lifecycle Event)
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.236 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.244 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.248 228708 INFO nova.virt.libvirt.driver [-] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Instance spawned successfully.
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.248 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.260 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.265 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.268 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.268 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.269 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.269 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.270 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.270 228708 DEBUG nova.virt.libvirt.driver [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.276 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.277 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064445.2318347, ca79ca37-186d-411c-b60c-640a85d7c8a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.277 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.288 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.294 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064445.2380116, ca79ca37-186d-411c-b60c-640a85d7c8a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.294 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:54:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904002f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.319 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.321 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.342 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.347 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.348 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.348 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.351 228708 INFO nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Took 7.89 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.352 228708 DEBUG nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.401 228708 INFO nova.compute.manager [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Took 8.55 seconds to build instance.#033[00m
Nov 25 04:54:05 np0005534696 nova_compute[228704]: 2025-11-25 09:54:05.411 228708 DEBUG oslo_concurrency.lockutils [None req-cf34b357-4254-4770-a619-66e26226c62b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.659 232274 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.659 232274 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:05.659 232274 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:05.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.236 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[861e6aac-bdba-4e24-84ae-c59a4032bda3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.237 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde5dde40-31 in ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.239 232274 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde5dde40-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.239 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[987628b1-08b0-4f37-8854-c16a4d3eb4ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.242 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1c786c-5922-439c-9dee-2eaa8ea6d9ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.262 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfa6337-0c23-405f-a89e-f532a3a60075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.274 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[a6afe6c1-5383-4836-8b2a-981796282fa4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.276 142676 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpomvdxtp6/privsep.sock']#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.801 142676 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.802 142676 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpomvdxtp6/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.721 232331 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.724 232331 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.726 232331 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.726 232331 INFO oslo.privsep.daemon [-] privsep daemon running as pid 232331#033[00m
Nov 25 04:54:06 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:06.804 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1b367b-3d5b-483b-9827-2b9bd8b38355]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:06.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.182 228708 DEBUG nova.compute.manager [req-c52d4955-7453-4082-b1ce-c28cb675d0ed req-b8bbc75f-f5e4-4510-8b23-5b898e772c6b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.182 228708 DEBUG oslo_concurrency.lockutils [req-c52d4955-7453-4082-b1ce-c28cb675d0ed req-b8bbc75f-f5e4-4510-8b23-5b898e772c6b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.182 228708 DEBUG oslo_concurrency.lockutils [req-c52d4955-7453-4082-b1ce-c28cb675d0ed req-b8bbc75f-f5e4-4510-8b23-5b898e772c6b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.183 228708 DEBUG oslo_concurrency.lockutils [req-c52d4955-7453-4082-b1ce-c28cb675d0ed req-b8bbc75f-f5e4-4510-8b23-5b898e772c6b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.183 228708 DEBUG nova.compute.manager [req-c52d4955-7453-4082-b1ce-c28cb675d0ed req-b8bbc75f-f5e4-4510-8b23-5b898e772c6b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] No waiting events found dispatching network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.183 228708 WARNING nova.compute.manager [req-c52d4955-7453-4082-b1ce-c28cb675d0ed req-b8bbc75f-f5e4-4510-8b23-5b898e772c6b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received unexpected event network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff for instance with vm_state active and task_state None.#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.203 232331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.204 232331 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.204 232331 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9000054f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:54:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.676 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[1d56dd7e-18a8-4281-9b58-195a6aa4ca77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.691 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[00c3da9b-061f-455a-8410-7e876d382303]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 NetworkManager[48892]: <info>  [1764064447.6915] manager: (tapde5dde40-30): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 25 04:54:07 np0005534696 systemd-udevd[232344]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.714 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[d69512c6-7e0f-40d2-a7f6-8a43536871d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.719 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[ff21d38d-d534-4ede-8955-2034f867eecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 NetworkManager[48892]: <info>  [1764064447.7387] device (tapde5dde40-30): carrier: link connected
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.742 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[5149a738-e7ae-4383-9b20-6053a47e6fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.761 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[60bcf491-abb4-4306-b28c-64d0d1e8dbcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde5dde40-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:23:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 322916, 'reachable_time': 40251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232354, 'error': None, 'target': 'ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.773 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[1633db09-838e-46d2-9355-6366acf0b1ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:23f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 322916, 'tstamp': 322916}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232356, 'error': None, 'target': 'ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.783 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[c0aaf0f4-fe47-48c9-80b7-fde5dcf4fa2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde5dde40-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:23:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 322916, 'reachable_time': 40251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 232357, 'error': None, 'target': 'ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.802 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[ead83d7d-731b-40ef-8188-fe633b27c9b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.839 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[94df3eef-f96c-48c7-a7c3-c88c303d64dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.840 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde5dde40-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.841 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.841 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde5dde40-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:07 np0005534696 NetworkManager[48892]: <info>  [1764064447.8439] manager: (tapde5dde40-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.843 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:07 np0005534696 kernel: tapde5dde40-30: entered promiscuous mode
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.845 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.846 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde5dde40-30, col_values=(('external_ids', {'iface-id': 'ec0e3955-0ee4-4e17-a11c-940d6b690be5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:07 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:07Z|00031|binding|INFO|Releasing lport ec0e3955-0ee4-4e17-a11c-940d6b690be5 from this chassis (sb_readonly=0)
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.847 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.862 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.863 142676 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de5dde40-3ef0-4c85-b48a-62ea2f4c04e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de5dde40-3ef0-4c85-b48a-62ea2f4c04e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.864 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[9499cb7e-f376-4a5e-b08b-48d484caaa81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.865 142676 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: global
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    log         /dev/log local0 debug
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    log-tag     haproxy-metadata-proxy-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    user        root
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    group       root
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    maxconn     1024
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    pidfile     /var/lib/neutron/external/pids/de5dde40-3ef0-4c85-b48a-62ea2f4c04e7.pid.haproxy
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    daemon
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: defaults
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    log global
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    mode http
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    option httplog
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    option dontlognull
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    option http-server-close
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    option forwardfor
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    retries                 3
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    timeout http-request    30s
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    timeout connect         30s
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    timeout client          32s
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    timeout server          32s
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    timeout http-keep-alive 30s
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: listen listener
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    bind 169.254.169.254:80
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]:    http-request add-header X-OVN-Network-ID de5dde40-3ef0-4c85-b48a-62ea2f4c04e7
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:54:07 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:07.866 142676 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7', 'env', 'PROCESS_TAG=haproxy-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de5dde40-3ef0-4c85-b48a-62ea2f4c04e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:54:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904002f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:07 np0005534696 nova_compute[228704]: 2025-11-25 09:54:07.978 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:08 np0005534696 podman[232386]: 2025-11-25 09:54:08.170484441 +0000 UTC m=+0.032927158 container create 93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:54:08 np0005534696 systemd[1]: Started libpod-conmon-93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a.scope.
Nov 25 04:54:08 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:54:08 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d40dc5f3db64b934bd0321967ccec695b0402d7f2247ac07c8fbec574237f1b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:54:08 np0005534696 podman[232386]: 2025-11-25 09:54:08.222610114 +0000 UTC m=+0.085052840 container init 93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:54:08 np0005534696 podman[232386]: 2025-11-25 09:54:08.227714262 +0000 UTC m=+0.090156978 container start 93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 04:54:08 np0005534696 podman[232386]: 2025-11-25 09:54:08.154877102 +0000 UTC m=+0.017319837 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:54:08 np0005534696 neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7[232397]: [NOTICE]   (232401) : New worker (232403) forked
Nov 25 04:54:08 np0005534696 neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7[232397]: [NOTICE]   (232401) : Loading success.
Nov 25 04:54:08 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:08Z|00032|binding|INFO|Releasing lport ec0e3955-0ee4-4e17-a11c-940d6b690be5 from this chassis (sb_readonly=0)
Nov 25 04:54:08 np0005534696 nova_compute[228704]: 2025-11-25 09:54:08.597 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:08 np0005534696 NetworkManager[48892]: <info>  [1764064448.5991] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Nov 25 04:54:08 np0005534696 NetworkManager[48892]: <info>  [1764064448.5996] device (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:54:08 np0005534696 NetworkManager[48892]: <info>  [1764064448.6005] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 25 04:54:08 np0005534696 NetworkManager[48892]: <info>  [1764064448.6009] device (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 04:54:08 np0005534696 NetworkManager[48892]: <info>  [1764064448.6016] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 25 04:54:08 np0005534696 NetworkManager[48892]: <info>  [1764064448.6021] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 25 04:54:08 np0005534696 NetworkManager[48892]: <info>  [1764064448.6025] device (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 04:54:08 np0005534696 NetworkManager[48892]: <info>  [1764064448.6028] device (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 04:54:08 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:08Z|00033|binding|INFO|Releasing lport ec0e3955-0ee4-4e17-a11c-940d6b690be5 from this chassis (sb_readonly=0)
Nov 25 04:54:08 np0005534696 nova_compute[228704]: 2025-11-25 09:54:08.637 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:08 np0005534696 nova_compute[228704]: 2025-11-25 09:54:08.641 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:08 np0005534696 nova_compute[228704]: 2025-11-25 09:54:08.904 228708 DEBUG nova.compute.manager [req-fa07ee9b-9f2d-4a8b-89bd-4e8e4f4f118d req-4c970622-339d-44fd-a3ab-c6b367cb7e1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-changed-21840a13-6732-487c-8048-5f629bcfa4ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:08 np0005534696 nova_compute[228704]: 2025-11-25 09:54:08.904 228708 DEBUG nova.compute.manager [req-fa07ee9b-9f2d-4a8b-89bd-4e8e4f4f118d req-4c970622-339d-44fd-a3ab-c6b367cb7e1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Refreshing instance network info cache due to event network-changed-21840a13-6732-487c-8048-5f629bcfa4ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:54:08 np0005534696 nova_compute[228704]: 2025-11-25 09:54:08.904 228708 DEBUG oslo_concurrency.lockutils [req-fa07ee9b-9f2d-4a8b-89bd-4e8e4f4f118d req-4c970622-339d-44fd-a3ab-c6b367cb7e1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:54:08 np0005534696 nova_compute[228704]: 2025-11-25 09:54:08.905 228708 DEBUG oslo_concurrency.lockutils [req-fa07ee9b-9f2d-4a8b-89bd-4e8e4f4f118d req-4c970622-339d-44fd-a3ab-c6b367cb7e1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:54:08 np0005534696 nova_compute[228704]: 2025-11-25 09:54:08.905 228708 DEBUG nova.network.neutron [req-fa07ee9b-9f2d-4a8b-89bd-4e8e4f4f118d req-4c970622-339d-44fd-a3ab-c6b367cb7e1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Refreshing network info cache for port 21840a13-6732-487c-8048-5f629bcfa4ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:54:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:08.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:09.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:09 np0005534696 nova_compute[228704]: 2025-11-25 09:54:09.909 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900005670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900005670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:10 np0005534696 nova_compute[228704]: 2025-11-25 09:54:10.164 228708 DEBUG nova.network.neutron [req-fa07ee9b-9f2d-4a8b-89bd-4e8e4f4f118d req-4c970622-339d-44fd-a3ab-c6b367cb7e1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updated VIF entry in instance network info cache for port 21840a13-6732-487c-8048-5f629bcfa4ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:54:10 np0005534696 nova_compute[228704]: 2025-11-25 09:54:10.165 228708 DEBUG nova.network.neutron [req-fa07ee9b-9f2d-4a8b-89bd-4e8e4f4f118d req-4c970622-339d-44fd-a3ab-c6b367cb7e1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:54:10 np0005534696 nova_compute[228704]: 2025-11-25 09:54:10.178 228708 DEBUG oslo_concurrency.lockutils [req-fa07ee9b-9f2d-4a8b-89bd-4e8e4f4f118d req-4c970622-339d-44fd-a3ab-c6b367cb7e1a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:54:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:54:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:10.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:11.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900006b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:12.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:12 np0005534696 nova_compute[228704]: 2025-11-25 09:54:12.979 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900006b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:13.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:14 np0005534696 nova_compute[228704]: 2025-11-25 09:54:14.910 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:14.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900006b40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:15 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:15Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:3f:c2 10.100.0.7
Nov 25 04:54:15 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:15Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:3f:c2 10.100.0.7
Nov 25 04:54:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095415 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:54:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:54:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:15.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:54:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904004700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:16 np0005534696 nova_compute[228704]: 2025-11-25 09:54:16.371 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:16.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904004700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.375 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.375 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.375 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.376 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.376 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:54:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:54:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/737909963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.720 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.782 228708 DEBUG nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.782 228708 DEBUG nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 25 04:54:17 np0005534696 podman[232442]: 2025-11-25 09:54:17.798517226 +0000 UTC m=+0.047209862 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:54:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:17.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.980 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.996 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.997 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4765MB free_disk=59.94289016723633GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": 
"0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.997 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:17 np0005534696 nova_compute[228704]: 2025-11-25 09:54:17.998 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.060 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Instance ca79ca37-186d-411c-b60c-640a85d7c8a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.060 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.060 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:54:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.092 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:54:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:54:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3923914555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.443 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.448 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.486 228708 ERROR nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] [req-0dad6c31-e04e-42d4-8580-46c892f41211] Failed to update inventory to [{'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID e8eea1e0-1833-4152-af65-8b442fac3e0d.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-0dad6c31-e04e-42d4-8580-46c892f41211"}]}#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.501 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing inventories for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.512 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating ProviderTree inventory for provider e8eea1e0-1833-4152-af65-8b442fac3e0d from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.513 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.522 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing aggregate associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.538 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing trait associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, traits: HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.570 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.908 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:54:18 np0005534696 nova_compute[228704]: 2025-11-25 09:54:18.912 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:54:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:18.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:19 np0005534696 nova_compute[228704]: 2025-11-25 09:54:19.153 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updated inventory for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7681, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 25 04:54:19 np0005534696 nova_compute[228704]: 2025-11-25 09:54:19.154 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 25 04:54:19 np0005534696 nova_compute[228704]: 2025-11-25 09:54:19.154 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:54:19 np0005534696 nova_compute[228704]: 2025-11-25 09:54:19.188 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:54:19 np0005534696 nova_compute[228704]: 2025-11-25 09:54:19.189 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:19.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:19 np0005534696 nova_compute[228704]: 2025-11-25 09:54:19.911 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:20 np0005534696 nova_compute[228704]: 2025-11-25 09:54:20.188 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:20 np0005534696 nova_compute[228704]: 2025-11-25 09:54:20.189 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:20 np0005534696 nova_compute[228704]: 2025-11-25 09:54:20.189 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:20 np0005534696 nova_compute[228704]: 2025-11-25 09:54:20.189 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:54:20 np0005534696 nova_compute[228704]: 2025-11-25 09:54:20.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:20.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:21 np0005534696 nova_compute[228704]: 2025-11-25 09:54:21.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:21 np0005534696 nova_compute[228704]: 2025-11-25 09:54:21.415 228708 INFO nova.compute.manager [None req-4a81a8c2-4dab-48fe-9159-182187b0bc37 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Get console output#033[00m
Nov 25 04:54:21 np0005534696 nova_compute[228704]: 2025-11-25 09:54:21.419 228708 INFO oslo.privsep.daemon [None req-4a81a8c2-4dab-48fe-9159-182187b0bc37 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpv9btlott/privsep.sock']#033[00m
Nov 25 04:54:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:21.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:21 np0005534696 nova_compute[228704]: 2025-11-25 09:54:21.955 228708 INFO oslo.privsep.daemon [None req-4a81a8c2-4dab-48fe-9159-182187b0bc37 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 25 04:54:21 np0005534696 nova_compute[228704]: 2025-11-25 09:54:21.873 232536 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 25 04:54:21 np0005534696 nova_compute[228704]: 2025-11-25 09:54:21.877 232536 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 25 04:54:21 np0005534696 nova_compute[228704]: 2025-11-25 09:54:21.878 232536 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 25 04:54:21 np0005534696 nova_compute[228704]: 2025-11-25 09:54:21.878 232536 INFO oslo.privsep.daemon [-] privsep daemon running as pid 232536#033[00m
Nov 25 04:54:22 np0005534696 nova_compute[228704]: 2025-11-25 09:54:22.033 232536 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 04:54:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:22 np0005534696 nova_compute[228704]: 2025-11-25 09:54:22.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:22 np0005534696 nova_compute[228704]: 2025-11-25 09:54:22.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:22 np0005534696 nova_compute[228704]: 2025-11-25 09:54:22.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:54:22 np0005534696 nova_compute[228704]: 2025-11-25 09:54:22.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:54:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:22.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:22 np0005534696 nova_compute[228704]: 2025-11-25 09:54:22.981 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:23 np0005534696 nova_compute[228704]: 2025-11-25 09:54:23.197 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:54:23 np0005534696 nova_compute[228704]: 2025-11-25 09:54:23.197 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquired lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:54:23 np0005534696 nova_compute[228704]: 2025-11-25 09:54:23.198 228708 DEBUG nova.network.neutron [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 25 04:54:23 np0005534696 nova_compute[228704]: 2025-11-25 09:54:23.198 228708 DEBUG nova.objects.instance [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lazy-loading 'info_cache' on Instance uuid ca79ca37-186d-411c-b60c-640a85d7c8a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:54:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:23.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:24 np0005534696 nova_compute[228704]: 2025-11-25 09:54:24.239 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:24 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:24.240 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:54:24 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:24.241 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:54:24 np0005534696 nova_compute[228704]: 2025-11-25 09:54:24.912 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:24.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:25.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:26 np0005534696 nova_compute[228704]: 2025-11-25 09:54:26.517 228708 DEBUG nova.network.neutron [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:54:26 np0005534696 nova_compute[228704]: 2025-11-25 09:54:26.540 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Releasing lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:54:26 np0005534696 nova_compute[228704]: 2025-11-25 09:54:26.540 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 25 04:54:26 np0005534696 nova_compute[228704]: 2025-11-25 09:54:26.540 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:54:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:26.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:27 np0005534696 podman[232543]: 2025-11-25 09:54:27.344836266 +0000 UTC m=+0.057953766 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:54:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:27.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8ec006e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:27 np0005534696 nova_compute[228704]: 2025-11-25 09:54:27.982 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:28 np0005534696 nova_compute[228704]: 2025-11-25 09:54:28.553 228708 DEBUG oslo_concurrency.lockutils [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "interface-ca79ca37-186d-411c-b60c-640a85d7c8a0-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:28 np0005534696 nova_compute[228704]: 2025-11-25 09:54:28.554 228708 DEBUG oslo_concurrency.lockutils [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "interface-ca79ca37-186d-411c-b60c-640a85d7c8a0-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:28 np0005534696 nova_compute[228704]: 2025-11-25 09:54:28.554 228708 DEBUG nova.objects.instance [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'flavor' on Instance uuid ca79ca37-186d-411c-b60c-640a85d7c8a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:54:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:28.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:29 np0005534696 nova_compute[228704]: 2025-11-25 09:54:29.068 228708 DEBUG nova.objects.instance [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_requests' on Instance uuid ca79ca37-186d-411c-b60c-640a85d7c8a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:54:29 np0005534696 nova_compute[228704]: 2025-11-25 09:54:29.079 228708 DEBUG nova.network.neutron [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:54:29 np0005534696 nova_compute[228704]: 2025-11-25 09:54:29.186 228708 DEBUG nova.policy [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:54:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:29 np0005534696 nova_compute[228704]: 2025-11-25 09:54:29.563 228708 DEBUG nova.network.neutron [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Successfully created port: bce4add4-0500-49de-a844-e33f109cc5a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 04:54:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:29.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:29 np0005534696 nova_compute[228704]: 2025-11-25 09:54:29.915 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00d730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007460 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:30.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095431 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:54:31 np0005534696 nova_compute[228704]: 2025-11-25 09:54:31.312 228708 DEBUG nova.network.neutron [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Successfully updated port: bce4add4-0500-49de-a844-e33f109cc5a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:54:31 np0005534696 nova_compute[228704]: 2025-11-25 09:54:31.329 228708 DEBUG oslo_concurrency.lockutils [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:54:31 np0005534696 nova_compute[228704]: 2025-11-25 09:54:31.329 228708 DEBUG oslo_concurrency.lockutils [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:54:31 np0005534696 nova_compute[228704]: 2025-11-25 09:54:31.329 228708 DEBUG nova.network.neutron [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:54:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:31 np0005534696 nova_compute[228704]: 2025-11-25 09:54:31.425 228708 DEBUG nova.compute.manager [req-908b31c1-e798-4cf9-85f5-7d33f3b25215 req-7136d137-d3e3-4caf-8555-09786acf34e7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-changed-bce4add4-0500-49de-a844-e33f109cc5a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:31 np0005534696 nova_compute[228704]: 2025-11-25 09:54:31.425 228708 DEBUG nova.compute.manager [req-908b31c1-e798-4cf9-85f5-7d33f3b25215 req-7136d137-d3e3-4caf-8555-09786acf34e7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Refreshing instance network info cache due to event network-changed-bce4add4-0500-49de-a844-e33f109cc5a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:54:31 np0005534696 nova_compute[228704]: 2025-11-25 09:54:31.425 228708 DEBUG oslo_concurrency.lockutils [req-908b31c1-e798-4cf9-85f5-7d33f3b25215 req-7136d137-d3e3-4caf-8555-09786acf34e7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:54:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:31.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:32.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:32 np0005534696 nova_compute[228704]: 2025-11-25 09:54:32.983 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.242 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.375 228708 DEBUG nova.network.neutron [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.389 228708 DEBUG oslo_concurrency.lockutils [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.390 228708 DEBUG oslo_concurrency.lockutils [req-908b31c1-e798-4cf9-85f5-7d33f3b25215 req-7136d137-d3e3-4caf-8555-09786acf34e7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.390 228708 DEBUG nova.network.neutron [req-908b31c1-e798-4cf9-85f5-7d33f3b25215 req-7136d137-d3e3-4caf-8555-09786acf34e7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Refreshing network info cache for port bce4add4-0500-49de-a844-e33f109cc5a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.392 228708 DEBUG nova.virt.libvirt.vif [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-390927783',display_name='tempest-TestNetworkBasicOps-server-390927783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-390927783',id=3,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/LOSsuqS79AZmC6VwV2XH9CWUBaXJh1TZaGbQ6eYP2j6spxOrdg2cOeHSfAGVBq21aZfYvQ6caaSpZDGxI5QanjNZPSsJ3dPGHUybUeoJjsrYJPbKSgEEOXBfITkpdaw==',key_name='tempest-TestNetworkBasicOps-244155696',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:54:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-94zvllik',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:54:05Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ca79ca37-186d-411c-b60c-640a85d7c8a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.392 228708 DEBUG nova.network.os_vif_util [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.393 228708 DEBUG nova.network.os_vif_util [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:98:7e,bridge_name='br-int',has_traffic_filtering=True,id=bce4add4-0500-49de-a844-e33f109cc5a7,network=Network(8e61eab8-1283-49fe-833d-4cfce4c0f212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbce4add4-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.393 228708 DEBUG os_vif [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:98:7e,bridge_name='br-int',has_traffic_filtering=True,id=bce4add4-0500-49de-a844-e33f109cc5a7,network=Network(8e61eab8-1283-49fe-833d-4cfce4c0f212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbce4add4-05') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.394 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.394 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.394 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.396 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.396 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce4add4-05, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.396 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbce4add4-05, col_values=(('external_ids', {'iface-id': 'bce4add4-0500-49de-a844-e33f109cc5a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:98:7e', 'vm-uuid': 'ca79ca37-186d-411c-b60c-640a85d7c8a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:33 np0005534696 NetworkManager[48892]: <info>  [1764064473.3987] manager: (tapbce4add4-05): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.401 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.402 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.403 228708 INFO os_vif [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:98:7e,bridge_name='br-int',has_traffic_filtering=True,id=bce4add4-0500-49de-a844-e33f109cc5a7,network=Network(8e61eab8-1283-49fe-833d-4cfce4c0f212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbce4add4-05')#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.404 228708 DEBUG nova.virt.libvirt.vif [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-390927783',display_name='tempest-TestNetworkBasicOps-server-390927783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-390927783',id=3,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/LOSsuqS79AZmC6VwV2XH9CWUBaXJh1TZaGbQ6eYP2j6spxOrdg2cOeHSfAGVBq21aZfYvQ6caaSpZDGxI5QanjNZPSsJ3dPGHUybUeoJjsrYJPbKSgEEOXBfITkpdaw==',key_name='tempest-TestNetworkBasicOps-244155696',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:54:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-94zvllik',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:54:05Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ca79ca37-186d-411c-b60c-640a85d7c8a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.404 228708 DEBUG nova.network.os_vif_util [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.404 228708 DEBUG nova.network.os_vif_util [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:98:7e,bridge_name='br-int',has_traffic_filtering=True,id=bce4add4-0500-49de-a844-e33f109cc5a7,network=Network(8e61eab8-1283-49fe-833d-4cfce4c0f212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbce4add4-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.406 228708 DEBUG nova.virt.libvirt.guest [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] attach device xml: <interface type="ethernet">
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <mac address="fa:16:3e:4a:98:7e"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <model type="virtio"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <mtu size="1442"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <target dev="tapbce4add4-05"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]: </interface>
Nov 25 04:54:33 np0005534696 nova_compute[228704]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 25 04:54:33 np0005534696 NetworkManager[48892]: <info>  [1764064473.4140] manager: (tapbce4add4-05): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 25 04:54:33 np0005534696 kernel: tapbce4add4-05: entered promiscuous mode
Nov 25 04:54:33 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:33Z|00034|binding|INFO|Claiming lport bce4add4-0500-49de-a844-e33f109cc5a7 for this chassis.
Nov 25 04:54:33 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:33Z|00035|binding|INFO|bce4add4-0500-49de-a844-e33f109cc5a7: Claiming fa:16:3e:4a:98:7e 10.100.0.29
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.418 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.431 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:98:7e 10.100.0.29'], port_security=['fa:16:3e:4a:98:7e 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': 'ca79ca37-186d-411c-b60c-640a85d7c8a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e61eab8-1283-49fe-833d-4cfce4c0f212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76109362-7c24-42bb-adf6-065238c3432b, chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=bce4add4-0500-49de-a844-e33f109cc5a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.432 142676 INFO neutron.agent.ovn.metadata.agent [-] Port bce4add4-0500-49de-a844-e33f109cc5a7 in datapath 8e61eab8-1283-49fe-833d-4cfce4c0f212 bound to our chassis#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.433 142676 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e61eab8-1283-49fe-833d-4cfce4c0f212#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.443 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[fbb5b2d1-6d25-4d38-9f20-f3cf802bd31e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.444 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e61eab8-11 in ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.450 232274 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e61eab8-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.450 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[dd33469c-0e27-4c63-a7d3-4bf5316e9b50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.450 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8ee851-b166-4f69-9125-0e95372a6fe5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.461 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:33Z|00036|binding|INFO|Setting lport bce4add4-0500-49de-a844-e33f109cc5a7 ovn-installed in OVS
Nov 25 04:54:33 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:33Z|00037|binding|INFO|Setting lport bce4add4-0500-49de-a844-e33f109cc5a7 up in Southbound
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.463 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.473 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad379b3-5523-490c-aa2d-c0c3d8c6706b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 systemd-udevd[232591]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.488 228708 DEBUG nova.virt.libvirt.driver [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.488 228708 DEBUG nova.virt.libvirt.driver [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.488 228708 DEBUG nova.virt.libvirt.driver [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:c5:3f:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.488 228708 DEBUG nova.virt.libvirt.driver [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:4a:98:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.487 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[25c69d32-30da-4b95-9486-3e9329163fcd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 NetworkManager[48892]: <info>  [1764064473.4970] device (tapbce4add4-05): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:54:33 np0005534696 NetworkManager[48892]: <info>  [1764064473.4976] device (tapbce4add4-05): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:54:33 np0005534696 podman[232579]: 2025-11-25 09:54:33.514222167 +0000 UTC m=+0.070677722 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.516 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[6915ab99-50f3-44e4-904e-1ae96b3fda39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.521 228708 DEBUG nova.virt.libvirt.guest [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <nova:name>tempest-TestNetworkBasicOps-server-390927783</nova:name>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <nova:creationTime>2025-11-25 09:54:33</nova:creationTime>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <nova:flavor name="m1.nano">
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:memory>128</nova:memory>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:disk>1</nova:disk>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:swap>0</nova:swap>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:vcpus>1</nova:vcpus>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  </nova:flavor>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <nova:owner>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  </nova:owner>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  <nova:ports>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:port uuid="21840a13-6732-487c-8048-5f629bcfa4ff">
Nov 25 04:54:33 np0005534696 nova_compute[228704]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    </nova:port>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    <nova:port uuid="bce4add4-0500-49de-a844-e33f109cc5a7">
Nov 25 04:54:33 np0005534696 nova_compute[228704]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:    </nova:port>
Nov 25 04:54:33 np0005534696 nova_compute[228704]:  </nova:ports>
Nov 25 04:54:33 np0005534696 nova_compute[228704]: </nova:instance>
Nov 25 04:54:33 np0005534696 nova_compute[228704]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 04:54:33 np0005534696 systemd-udevd[232603]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:54:33 np0005534696 NetworkManager[48892]: <info>  [1764064473.5221] manager: (tap8e61eab8-10): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.519 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[62f3a0a8-7205-4c2f-9965-588495f15fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.540 228708 DEBUG oslo_concurrency.lockutils [None req-b43499ba-a991-45f0-ac90-70e73593c7b6 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "interface-ca79ca37-186d-411c-b60c-640a85d7c8a0-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.543 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[a46573cb-b358-431b-b510-212682fb8a2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.545 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebdb068-d564-4371-817f-01a227e26f3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 NetworkManager[48892]: <info>  [1764064473.5585] device (tap8e61eab8-10): carrier: link connected
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.561 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[ad617b9a-0100-454c-ae61-7e322cefa853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.574 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[538fb7b8-88b9-422d-8188-9b3444b4e754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e61eab8-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:e1:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 325498, 'reachable_time': 38878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232618, 'error': None, 'target': 'ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.585 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[29b6cb2a-fba4-4925-b5f3-6a600364d4ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:e122'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 325498, 'tstamp': 325498}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 232619, 'error': None, 'target': 'ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.596 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfd2b25-e1c1-4c24-8489-64efa6c66190]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e61eab8-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:e1:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 325498, 'reachable_time': 38878, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 232620, 'error': None, 'target': 'ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.614 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[bdfd004f-12ad-40ba-a507-a40c2448c1d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.653 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc92ce4-622e-4825-983c-59b04a04d2da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.653 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e61eab8-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.654 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.654 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e61eab8-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:33 np0005534696 NetworkManager[48892]: <info>  [1764064473.6560] manager: (tap8e61eab8-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.655 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 kernel: tap8e61eab8-10: entered promiscuous mode
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.657 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.660 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e61eab8-10, col_values=(('external_ids', {'iface-id': '060080bb-132d-4ee2-81d4-d949cc41cc8d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.660 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:33Z|00038|binding|INFO|Releasing lport 060080bb-132d-4ee2-81d4-d949cc41cc8d from this chassis (sb_readonly=0)
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.661 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.661 142676 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e61eab8-1283-49fe-833d-4cfce4c0f212.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e61eab8-1283-49fe-833d-4cfce4c0f212.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.662 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0f60e5-3a5a-47d5-9679-bbb2a20b2a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.663 142676 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: global
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    log         /dev/log local0 debug
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    log-tag     haproxy-metadata-proxy-8e61eab8-1283-49fe-833d-4cfce4c0f212
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    user        root
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    group       root
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    maxconn     1024
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    pidfile     /var/lib/neutron/external/pids/8e61eab8-1283-49fe-833d-4cfce4c0f212.pid.haproxy
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    daemon
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: defaults
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    log global
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    mode http
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    option httplog
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    option dontlognull
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    option http-server-close
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    option forwardfor
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    retries                 3
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    timeout http-request    30s
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    timeout connect         30s
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    timeout client          32s
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    timeout server          32s
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    timeout http-keep-alive 30s
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: listen listener
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    bind 169.254.169.254:80
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]:    http-request add-header X-OVN-Network-ID 8e61eab8-1283-49fe-833d-4cfce4c0f212
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:54:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:33.663 142676 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212', 'env', 'PROCESS_TAG=haproxy-8e61eab8-1283-49fe-833d-4cfce4c0f212', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e61eab8-1283-49fe-833d-4cfce4c0f212.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.674 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.710 228708 DEBUG nova.compute.manager [req-ba5680c3-e58d-4668-b5f7-a6709e2142d3 req-cd760133-37d5-4125-bc17-a5c66f78982a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.710 228708 DEBUG oslo_concurrency.lockutils [req-ba5680c3-e58d-4668-b5f7-a6709e2142d3 req-cd760133-37d5-4125-bc17-a5c66f78982a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.710 228708 DEBUG oslo_concurrency.lockutils [req-ba5680c3-e58d-4668-b5f7-a6709e2142d3 req-cd760133-37d5-4125-bc17-a5c66f78982a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.711 228708 DEBUG oslo_concurrency.lockutils [req-ba5680c3-e58d-4668-b5f7-a6709e2142d3 req-cd760133-37d5-4125-bc17-a5c66f78982a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.711 228708 DEBUG nova.compute.manager [req-ba5680c3-e58d-4668-b5f7-a6709e2142d3 req-cd760133-37d5-4125-bc17-a5c66f78982a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] No waiting events found dispatching network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:54:33 np0005534696 nova_compute[228704]: 2025-11-25 09:54:33.711 228708 WARNING nova.compute.manager [req-ba5680c3-e58d-4668-b5f7-a6709e2142d3 req-cd760133-37d5-4125-bc17-a5c66f78982a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received unexpected event network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:54:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:33.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:33 np0005534696 podman[232649]: 2025-11-25 09:54:33.943961296 +0000 UTC m=+0.033655089 container create 53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 04:54:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:33 np0005534696 systemd[1]: Started libpod-conmon-53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4.scope.
Nov 25 04:54:33 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:54:33 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f616fb92451e9eaeac08d15aec95770cd8e9e520c9e36a638c848598b515d117/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:54:34 np0005534696 podman[232649]: 2025-11-25 09:54:34.004334601 +0000 UTC m=+0.094028404 container init 53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:54:34 np0005534696 podman[232649]: 2025-11-25 09:54:34.008772934 +0000 UTC m=+0.098466737 container start 53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:54:34 np0005534696 podman[232649]: 2025-11-25 09:54:33.928787142 +0000 UTC m=+0.018480955 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:54:34 np0005534696 neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212[232662]: [NOTICE]   (232666) : New worker (232668) forked
Nov 25 04:54:34 np0005534696 neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212[232662]: [NOTICE]   (232666) : Loading success.
Nov 25 04:54:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:34 np0005534696 nova_compute[228704]: 2025-11-25 09:54:34.703 228708 DEBUG nova.network.neutron [req-908b31c1-e798-4cf9-85f5-7d33f3b25215 req-7136d137-d3e3-4caf-8555-09786acf34e7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updated VIF entry in instance network info cache for port bce4add4-0500-49de-a844-e33f109cc5a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:54:34 np0005534696 nova_compute[228704]: 2025-11-25 09:54:34.703 228708 DEBUG nova.network.neutron [req-908b31c1-e798-4cf9-85f5-7d33f3b25215 req-7136d137-d3e3-4caf-8555-09786acf34e7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:54:34 np0005534696 nova_compute[228704]: 2025-11-25 09:54:34.717 228708 DEBUG oslo_concurrency.lockutils [req-908b31c1-e798-4cf9-85f5-7d33f3b25215 req-7136d137-d3e3-4caf-8555-09786acf34e7 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:54:34 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:34Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:98:7e 10.100.0.29
Nov 25 04:54:34 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:34Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:98:7e 10.100.0.29
Nov 25 04:54:34 np0005534696 nova_compute[228704]: 2025-11-25 09:54:34.917 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:34.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.439 228708 DEBUG oslo_concurrency.lockutils [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "interface-ca79ca37-186d-411c-b60c-640a85d7c8a0-bce4add4-0500-49de-a844-e33f109cc5a7" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.439 228708 DEBUG oslo_concurrency.lockutils [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "interface-ca79ca37-186d-411c-b60c-640a85d7c8a0-bce4add4-0500-49de-a844-e33f109cc5a7" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.447 228708 DEBUG nova.objects.instance [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'flavor' on Instance uuid ca79ca37-186d-411c-b60c-640a85d7c8a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.457 228708 DEBUG nova.virt.libvirt.vif [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-390927783',display_name='tempest-TestNetworkBasicOps-server-390927783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-390927783',id=3,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/LOSsuqS79AZmC6VwV2XH9CWUBaXJh1TZaGbQ6eYP2j6spxOrdg2cOeHSfAGVBq21aZfYvQ6caaSpZDGxI5QanjNZPSsJ3dPGHUybUeoJjsrYJPbKSgEEOXBfITkpdaw==',key_name='tempest-TestNetworkBasicOps-244155696',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:54:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-94zvllik',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:54:05Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ca79ca37-186d-411c-b60c-640a85d7c8a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.457 228708 DEBUG nova.network.os_vif_util [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.458 228708 DEBUG nova.network.os_vif_util [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:98:7e,bridge_name='br-int',has_traffic_filtering=True,id=bce4add4-0500-49de-a844-e33f109cc5a7,network=Network(8e61eab8-1283-49fe-833d-4cfce4c0f212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbce4add4-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.460 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:98:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbce4add4-05"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.461 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:98:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbce4add4-05"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.463 228708 DEBUG nova.virt.libvirt.driver [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Attempting to detach device tapbce4add4-05 from instance ca79ca37-186d-411c-b60c-640a85d7c8a0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.463 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] detach device xml: <interface type="ethernet">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <mac address="fa:16:3e:4a:98:7e"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <model type="virtio"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <mtu size="1442"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <target dev="tapbce4add4-05"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: </interface>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.466 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:98:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbce4add4-05"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.468 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4a:98:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbce4add4-05"/></interface>not found in domain: <domain type='kvm' id='1'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <name>instance-00000003</name>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <uuid>ca79ca37-186d-411c-b60c-640a85d7c8a0</uuid>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <metadata>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:name>tempest-TestNetworkBasicOps-server-390927783</nova:name>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:creationTime>2025-11-25 09:54:33</nova:creationTime>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:flavor name="m1.nano">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:memory>128</nova:memory>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:disk>1</nova:disk>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:swap>0</nova:swap>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:vcpus>1</nova:vcpus>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:flavor>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:owner>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:owner>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:ports>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:port uuid="21840a13-6732-487c-8048-5f629bcfa4ff">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </nova:port>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:port uuid="bce4add4-0500-49de-a844-e33f109cc5a7">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </nova:port>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:ports>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: </nova:instance>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </metadata>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <memory unit='KiB'>131072</memory>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <vcpu placement='static'>1</vcpu>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <resource>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <partition>/machine</partition>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </resource>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <sysinfo type='smbios'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <system>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='manufacturer'>RDO</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='serial'>ca79ca37-186d-411c-b60c-640a85d7c8a0</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='uuid'>ca79ca37-186d-411c-b60c-640a85d7c8a0</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='family'>Virtual Machine</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </system>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </sysinfo>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <os>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <boot dev='hd'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <smbios mode='sysinfo'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </os>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <features>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <acpi/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <apic/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <vmcoreinfo state='on'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </features>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <vendor>AMD</vendor>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='x2apic'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='hypervisor'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='vaes'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='stibp'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='ssbd'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='overflow-recov'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='succor'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='lbrv'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='pause-filter'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='v-vmsave-vmload'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='vgif'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='svm'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='topoext'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='npt'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='nrip-save'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <clock offset='utc'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <timer name='hpet' present='no'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </clock>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <on_poweroff>destroy</on_poweroff>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <on_reboot>restart</on_reboot>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <on_crash>destroy</on_crash>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <devices>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <disk type='network' device='disk'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <auth username='openstack'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <secret type='ceph' uuid='af1c9ae3-08d7-5547-a53d-2cccf7c6ef90'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </auth>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <source protocol='rbd' name='vms/ca79ca37-186d-411c-b60c-640a85d7c8a0_disk' index='2'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.100' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.102' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.101' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </source>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target dev='vda' bus='virtio'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='virtio-disk0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <disk type='network' device='cdrom'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <auth username='openstack'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <secret type='ceph' uuid='af1c9ae3-08d7-5547-a53d-2cccf7c6ef90'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </auth>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <source protocol='rbd' name='vms/ca79ca37-186d-411c-b60c-640a85d7c8a0_disk.config' index='1'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.100' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.102' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.101' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </source>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target dev='sda' bus='sata'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <readonly/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='sata0-0-0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pcie.0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='1' port='0x10'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='2' port='0x11'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='3' port='0x12'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.3'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='4' port='0x13'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.4'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='5' port='0x14'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.5'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='6' port='0x15'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.6'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='7' port='0x16'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.7'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='8' port='0x17'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.8'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='9' port='0x18'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.9'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='10' port='0x19'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.10'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='11' port='0x1a'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.11'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='12' port='0x1b'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.12'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='13' port='0x1c'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.13'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='14' port='0x1d'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.14'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='15' port='0x1e'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.15'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='16' port='0x1f'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.16'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='17' port='0x20'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.17'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='18' port='0x21'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.18'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='19' port='0x22'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.19'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='20' port='0x23'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.20'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='21' port='0x24'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.21'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='22' port='0x25'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.22'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='23' port='0x26'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.23'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='24' port='0x27'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.24'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='25' port='0x28'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.25'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-pci-bridge'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.26'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='usb'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='sata' index='0'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='ide'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <interface type='ethernet'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <mac address='fa:16:3e:c5:3f:c2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target dev='tap21840a13-67'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model type='virtio'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <mtu size='1442'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='net0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <interface type='ethernet'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <mac address='fa:16:3e:4a:98:7e'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target dev='tapbce4add4-05'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model type='virtio'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <mtu size='1442'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='net1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <serial type='pty'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <source path='/dev/pts/0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <log file='/var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/console.log' append='off'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target type='isa-serial' port='0'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <model name='isa-serial'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </target>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='serial0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </serial>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <source path='/dev/pts/0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <log file='/var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/console.log' append='off'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target type='serial' port='0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='serial0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </console>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <input type='tablet' bus='usb'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='input0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='usb' bus='0' port='1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </input>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <input type='mouse' bus='ps2'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='input1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </input>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <input type='keyboard' bus='ps2'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='input2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </input>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <listen type='address' address='::0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </graphics>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <audio id='1' type='none'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <video>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='video0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </video>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <watchdog model='itco' action='reset'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='watchdog0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </watchdog>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <memballoon model='virtio'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <stats period='10'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='balloon0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </memballoon>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <rng model='virtio'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <backend model='random'>/dev/urandom</backend>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='rng0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </rng>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </devices>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <label>system_u:system_r:svirt_t:s0:c954,c956</label>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c954,c956</imagelabel>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </seclabel>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <label>+107:+107</label>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <imagelabel>+107:+107</imagelabel>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </seclabel>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: </domain>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.469 228708 INFO nova.virt.libvirt.driver [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully detached device tapbce4add4-05 from instance ca79ca37-186d-411c-b60c-640a85d7c8a0 from the persistent domain config.#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.469 228708 DEBUG nova.virt.libvirt.driver [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] (1/8): Attempting to detach device tapbce4add4-05 with device alias net1 from instance ca79ca37-186d-411c-b60c-640a85d7c8a0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.470 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] detach device xml: <interface type="ethernet">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <mac address="fa:16:3e:4a:98:7e"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <model type="virtio"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <mtu size="1442"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <target dev="tapbce4add4-05"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: </interface>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 25 04:54:35 np0005534696 kernel: tapbce4add4-05 (unregistering): left promiscuous mode
Nov 25 04:54:35 np0005534696 NetworkManager[48892]: <info>  [1764064475.5613] device (tapbce4add4-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.563 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:35 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:35Z|00039|binding|INFO|Releasing lport bce4add4-0500-49de-a844-e33f109cc5a7 from this chassis (sb_readonly=0)
Nov 25 04:54:35 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:35Z|00040|binding|INFO|Setting lport bce4add4-0500-49de-a844-e33f109cc5a7 down in Southbound
Nov 25 04:54:35 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:35Z|00041|binding|INFO|Removing iface tapbce4add4-05 ovn-installed in OVS
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.565 228708 DEBUG nova.virt.libvirt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Received event <DeviceRemovedEvent: 1764064475.565032, ca79ca37-186d-411c-b60c-640a85d7c8a0 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.565 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.566 228708 DEBUG nova.virt.libvirt.driver [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Start waiting for the detach event from libvirt for device tapbce4add4-05 with device alias net1 for instance ca79ca37-186d-411c-b60c-640a85d7c8a0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.566 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4a:98:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbce4add4-05"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.568 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4a:98:7e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbce4add4-05"/></interface>not found in domain: <domain type='kvm' id='1'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <name>instance-00000003</name>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <uuid>ca79ca37-186d-411c-b60c-640a85d7c8a0</uuid>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <metadata>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:name>tempest-TestNetworkBasicOps-server-390927783</nova:name>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:creationTime>2025-11-25 09:54:33</nova:creationTime>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:flavor name="m1.nano">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:memory>128</nova:memory>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:disk>1</nova:disk>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:swap>0</nova:swap>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:vcpus>1</nova:vcpus>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:flavor>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:owner>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:owner>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:ports>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:port uuid="21840a13-6732-487c-8048-5f629bcfa4ff">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </nova:port>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:port uuid="bce4add4-0500-49de-a844-e33f109cc5a7">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </nova:port>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:ports>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: </nova:instance>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </metadata>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <memory unit='KiB'>131072</memory>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <vcpu placement='static'>1</vcpu>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <resource>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <partition>/machine</partition>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </resource>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <sysinfo type='smbios'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <system>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='manufacturer'>RDO</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='product'>OpenStack Compute</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='serial'>ca79ca37-186d-411c-b60c-640a85d7c8a0</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='uuid'>ca79ca37-186d-411c-b60c-640a85d7c8a0</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <entry name='family'>Virtual Machine</entry>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </system>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </sysinfo>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <os>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <boot dev='hd'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <smbios mode='sysinfo'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </os>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <features>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <acpi/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <apic/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <vmcoreinfo state='on'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </features>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <cpu mode='custom' match='exact' check='full'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <model fallback='forbid'>EPYC-Milan</model>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <vendor>AMD</vendor>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='x2apic'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='tsc-deadline'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='hypervisor'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='tsc_adjust'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='vaes'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='vpclmulqdq'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='spec-ctrl'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='stibp'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='ssbd'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='cmp_legacy'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='overflow-recov'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='succor'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='virt-ssbd'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='lbrv'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='tsc-scale'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='vmcb-clean'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='flushbyasid'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='pause-filter'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='pfthreshold'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='v-vmsave-vmload'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='vgif'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='lfence-always-serializing'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='svm'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='require' name='topoext'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='npt'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='nrip-save'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <feature policy='disable' name='svme-addr-chk'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <clock offset='utc'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <timer name='pit' tickpolicy='delay'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <timer name='hpet' present='no'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </clock>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <on_poweroff>destroy</on_poweroff>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <on_reboot>restart</on_reboot>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <on_crash>destroy</on_crash>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <devices>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <disk type='network' device='disk'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <auth username='openstack'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <secret type='ceph' uuid='af1c9ae3-08d7-5547-a53d-2cccf7c6ef90'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </auth>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <source protocol='rbd' name='vms/ca79ca37-186d-411c-b60c-640a85d7c8a0_disk' index='2'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.100' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.102' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.101' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </source>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target dev='vda' bus='virtio'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='virtio-disk0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <disk type='network' device='cdrom'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <driver name='qemu' type='raw' cache='none'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <auth username='openstack'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <secret type='ceph' uuid='af1c9ae3-08d7-5547-a53d-2cccf7c6ef90'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </auth>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <source protocol='rbd' name='vms/ca79ca37-186d-411c-b60c-640a85d7c8a0_disk.config' index='1'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.100' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.102' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <host name='192.168.122.101' port='6789'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </source>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target dev='sda' bus='sata'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <readonly/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='sata0-0-0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='0' model='pcie-root'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pcie.0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='1' port='0x10'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='2' port='0x11'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='3' port='0x12'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.3'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='4' port='0x13'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.4'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='5' port='0x14'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.5'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='6' port='0x15'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.6'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='7' port='0x16'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.7'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='8' port='0x17'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.8'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='9' port='0x18'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.9'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='10' port='0x19'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.10'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='11' port='0x1a'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.11'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='12' port='0x1b'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.12'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='13' port='0x1c'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.13'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='14' port='0x1d'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.14'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='15' port='0x1e'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.15'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='16' port='0x1f'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.16'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='17' port='0x20'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.17'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='18' port='0x21'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.18'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='19' port='0x22'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.19'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='20' port='0x23'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.20'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='21' port='0x24'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.21'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='22' port='0x25'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.22'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='23' port='0x26'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.23'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='24' port='0x27'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.24'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-root-port'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target chassis='25' port='0x28'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.25'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model name='pcie-pci-bridge'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='pci.26'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='usb'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <controller type='sata' index='0'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='ide'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </controller>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <interface type='ethernet'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <mac address='fa:16:3e:c5:3f:c2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target dev='tap21840a13-67'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model type='virtio'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <driver name='vhost' rx_queue_size='512'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <mtu size='1442'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='net0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <serial type='pty'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <source path='/dev/pts/0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <log file='/var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/console.log' append='off'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target type='isa-serial' port='0'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:        <model name='isa-serial'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      </target>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='serial0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </serial>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <console type='pty' tty='/dev/pts/0'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <source path='/dev/pts/0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <log file='/var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0/console.log' append='off'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <target type='serial' port='0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='serial0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </console>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <input type='tablet' bus='usb'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='input0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='usb' bus='0' port='1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </input>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <input type='mouse' bus='ps2'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='input1'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </input>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <input type='keyboard' bus='ps2'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='input2'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </input>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <listen type='address' address='::0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </graphics>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <audio id='1' type='none'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <video>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <model type='virtio' heads='1' primary='yes'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='video0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </video>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <watchdog model='itco' action='reset'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='watchdog0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </watchdog>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <memballoon model='virtio'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <stats period='10'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='balloon0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </memballoon>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <rng model='virtio'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <backend model='random'>/dev/urandom</backend>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <alias name='rng0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </rng>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </devices>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <label>system_u:system_r:svirt_t:s0:c954,c956</label>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c954,c956</imagelabel>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </seclabel>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <label>+107:+107</label>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <imagelabel>+107:+107</imagelabel>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </seclabel>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: </domain>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.569 228708 INFO nova.virt.libvirt.driver [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully detached device tapbce4add4-05 from instance ca79ca37-186d-411c-b60c-640a85d7c8a0 from the live domain config.#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.569 228708 DEBUG nova.virt.libvirt.vif [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-390927783',display_name='tempest-TestNetworkBasicOps-server-390927783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-390927783',id=3,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/LOSsuqS79AZmC6VwV2XH9CWUBaXJh1TZaGbQ6eYP2j6spxOrdg2cOeHSfAGVBq21aZfYvQ6caaSpZDGxI5QanjNZPSsJ3dPGHUybUeoJjsrYJPbKSgEEOXBfITkpdaw==',key_name='tempest-TestNetworkBasicOps-244155696',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:54:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-94zvllik',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:54:05Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ca79ca37-186d-411c-b60c-640a85d7c8a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.569 228708 DEBUG nova.network.os_vif_util [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "bce4add4-0500-49de-a844-e33f109cc5a7", "address": "fa:16:3e:4a:98:7e", "network": {"id": "8e61eab8-1283-49fe-833d-4cfce4c0f212", "bridge": "br-int", "label": "tempest-network-smoke--925031511", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbce4add4-05", "ovs_interfaceid": "bce4add4-0500-49de-a844-e33f109cc5a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.570 228708 DEBUG nova.network.os_vif_util [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:98:7e,bridge_name='br-int',has_traffic_filtering=True,id=bce4add4-0500-49de-a844-e33f109cc5a7,network=Network(8e61eab8-1283-49fe-833d-4cfce4c0f212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbce4add4-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.570 228708 DEBUG os_vif [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:98:7e,bridge_name='br-int',has_traffic_filtering=True,id=bce4add4-0500-49de-a844-e33f109cc5a7,network=Network(8e61eab8-1283-49fe-833d-4cfce4c0f212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbce4add4-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.572 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.572 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce4add4-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.573 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:98:7e 10.100.0.29'], port_security=['fa:16:3e:4a:98:7e 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': 'ca79ca37-186d-411c-b60c-640a85d7c8a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e61eab8-1283-49fe-833d-4cfce4c0f212', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=76109362-7c24-42bb-adf6-065238c3432b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=bce4add4-0500-49de-a844-e33f109cc5a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.574 142676 INFO neutron.agent.ovn.metadata.agent [-] Port bce4add4-0500-49de-a844-e33f109cc5a7 in datapath 8e61eab8-1283-49fe-833d-4cfce4c0f212 unbound from our chassis#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.575 142676 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e61eab8-1283-49fe-833d-4cfce4c0f212, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.576 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9dd53a-fec7-4674-b8bf-be2ec3af6ed5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.576 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.577 142676 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212 namespace which is not needed anymore#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.582 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.589 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.591 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.593 228708 INFO os_vif [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:98:7e,bridge_name='br-int',has_traffic_filtering=True,id=bce4add4-0500-49de-a844-e33f109cc5a7,network=Network(8e61eab8-1283-49fe-833d-4cfce4c0f212),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbce4add4-05')#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.593 228708 DEBUG nova.virt.libvirt.guest [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:name>tempest-TestNetworkBasicOps-server-390927783</nova:name>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:creationTime>2025-11-25 09:54:35</nova:creationTime>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:flavor name="m1.nano">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:memory>128</nova:memory>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:disk>1</nova:disk>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:swap>0</nova:swap>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:vcpus>1</nova:vcpus>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:flavor>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:owner>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:owner>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  <nova:ports>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    <nova:port uuid="21840a13-6732-487c-8048-5f629bcfa4ff">
Nov 25 04:54:35 np0005534696 nova_compute[228704]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:    </nova:port>
Nov 25 04:54:35 np0005534696 nova_compute[228704]:  </nova:ports>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: </nova:instance>
Nov 25 04:54:35 np0005534696 nova_compute[228704]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 25 04:54:35 np0005534696 neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212[232662]: [NOTICE]   (232666) : haproxy version is 2.8.14-c23fe91
Nov 25 04:54:35 np0005534696 neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212[232662]: [NOTICE]   (232666) : path to executable is /usr/sbin/haproxy
Nov 25 04:54:35 np0005534696 neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212[232662]: [WARNING]  (232666) : Exiting Master process...
Nov 25 04:54:35 np0005534696 neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212[232662]: [ALERT]    (232666) : Current worker (232668) exited with code 143 (Terminated)
Nov 25 04:54:35 np0005534696 neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212[232662]: [WARNING]  (232666) : All workers exited. Exiting... (0)
Nov 25 04:54:35 np0005534696 systemd[1]: libpod-53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4.scope: Deactivated successfully.
Nov 25 04:54:35 np0005534696 podman[232693]: 2025-11-25 09:54:35.67276479 +0000 UTC m=+0.031012850 container died 53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:54:35 np0005534696 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4-userdata-shm.mount: Deactivated successfully.
Nov 25 04:54:35 np0005534696 systemd[1]: var-lib-containers-storage-overlay-f616fb92451e9eaeac08d15aec95770cd8e9e520c9e36a638c848598b515d117-merged.mount: Deactivated successfully.
Nov 25 04:54:35 np0005534696 podman[232693]: 2025-11-25 09:54:35.694896683 +0000 UTC m=+0.053144745 container cleanup 53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 04:54:35 np0005534696 systemd[1]: libpod-conmon-53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4.scope: Deactivated successfully.
Nov 25 04:54:35 np0005534696 podman[232715]: 2025-11-25 09:54:35.732579962 +0000 UTC m=+0.022681430 container remove 53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.736 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[e663b0fc-ddd3-413e-805c-6fbb78f995a9]: (4, ('Tue Nov 25 09:54:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212 (53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4)\n53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4\nTue Nov 25 09:54:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212 (53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4)\n53d8a4c14b8d027c099db812c2458da84dc7cc24fd42295c286957a90c6120c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.737 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0cb553-2f5f-433d-9868-cf4aa8c186cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.738 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e61eab8-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.739 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:35 np0005534696 kernel: tap8e61eab8-10: left promiscuous mode
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.754 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.756 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[59f1ce6f-e233-4065-98aa-59e39fdc5957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.765 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[faab6a32-b3cd-4521-bf0e-a02835c1e402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.766 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[5b398cb9-4713-4675-9614-6a1a4bb4c5a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.777 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[9e44d587-009f-4128-9507-69ae74f4c885]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 325493, 'reachable_time': 31397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232730, 'error': None, 'target': 'ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.784 142787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e61eab8-1283-49fe-833d-4cfce4c0f212 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:54:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:35.785 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[e46febe2-8e8d-4e04-8e7d-776acf053ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:35 np0005534696 systemd[1]: run-netns-ovnmeta\x2d8e61eab8\x2d1283\x2d49fe\x2d833d\x2d4cfce4c0f212.mount: Deactivated successfully.
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.822 228708 DEBUG nova.compute.manager [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.822 228708 DEBUG oslo_concurrency.lockutils [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.822 228708 DEBUG oslo_concurrency.lockutils [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.822 228708 DEBUG oslo_concurrency.lockutils [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.823 228708 DEBUG nova.compute.manager [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] No waiting events found dispatching network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.823 228708 WARNING nova.compute.manager [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received unexpected event network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.823 228708 DEBUG nova.compute.manager [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-unplugged-bce4add4-0500-49de-a844-e33f109cc5a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.823 228708 DEBUG oslo_concurrency.lockutils [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.824 228708 DEBUG oslo_concurrency.lockutils [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.824 228708 DEBUG oslo_concurrency.lockutils [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.824 228708 DEBUG nova.compute.manager [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] No waiting events found dispatching network-vif-unplugged-bce4add4-0500-49de-a844-e33f109cc5a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:54:35 np0005534696 nova_compute[228704]: 2025-11-25 09:54:35.824 228708 WARNING nova.compute.manager [req-baaef0fc-02ff-4bc1-982d-ad0e243be6ec req-8afad6c6-54e1-4d44-8d82-b0e4a651c7ca c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received unexpected event network-vif-unplugged-bce4add4-0500-49de-a844-e33f109cc5a7 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:54:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:35.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:36 np0005534696 nova_compute[228704]: 2025-11-25 09:54:36.407 228708 DEBUG oslo_concurrency.lockutils [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:54:36 np0005534696 nova_compute[228704]: 2025-11-25 09:54:36.407 228708 DEBUG oslo_concurrency.lockutils [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:54:36 np0005534696 nova_compute[228704]: 2025-11-25 09:54:36.407 228708 DEBUG nova.network.neutron [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:54:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:36.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:37 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:37Z|00042|binding|INFO|Releasing lport ec0e3955-0ee4-4e17-a11c-940d6b690be5 from this chassis (sb_readonly=0)
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.374 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.498 228708 INFO nova.network.neutron [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Port bce4add4-0500-49de-a844-e33f109cc5a7 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.498 228708 DEBUG nova.network.neutron [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.517 228708 DEBUG oslo_concurrency.lockutils [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.535 228708 DEBUG oslo_concurrency.lockutils [None req-20fea097-9997-41eb-86bc-d26578f00b4e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "interface-ca79ca37-186d-411c-b60c-640a85d7c8a0-bce4add4-0500-49de-a844-e33f109cc5a7" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.826 228708 DEBUG nova.compute.manager [req-8aa9c25a-351f-4e67-9f8d-91f63b70e86f req-51ddac77-a1f2-407e-bcbd-5274897dc2fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-changed-21840a13-6732-487c-8048-5f629bcfa4ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.826 228708 DEBUG nova.compute.manager [req-8aa9c25a-351f-4e67-9f8d-91f63b70e86f req-51ddac77-a1f2-407e-bcbd-5274897dc2fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Refreshing instance network info cache due to event network-changed-21840a13-6732-487c-8048-5f629bcfa4ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.827 228708 DEBUG oslo_concurrency.lockutils [req-8aa9c25a-351f-4e67-9f8d-91f63b70e86f req-51ddac77-a1f2-407e-bcbd-5274897dc2fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.827 228708 DEBUG oslo_concurrency.lockutils [req-8aa9c25a-351f-4e67-9f8d-91f63b70e86f req-51ddac77-a1f2-407e-bcbd-5274897dc2fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.827 228708 DEBUG nova.network.neutron [req-8aa9c25a-351f-4e67-9f8d-91f63b70e86f req-51ddac77-a1f2-407e-bcbd-5274897dc2fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Refreshing network info cache for port 21840a13-6732-487c-8048-5f629bcfa4ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:54:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:37.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.898 228708 DEBUG nova.compute.manager [req-d7fdea01-85fb-4a4c-9b0b-5979442e776c req-8b7bf675-a397-4ecf-8a62-3731c99bfe9a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.899 228708 DEBUG oslo_concurrency.lockutils [req-d7fdea01-85fb-4a4c-9b0b-5979442e776c req-8b7bf675-a397-4ecf-8a62-3731c99bfe9a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.899 228708 DEBUG oslo_concurrency.lockutils [req-d7fdea01-85fb-4a4c-9b0b-5979442e776c req-8b7bf675-a397-4ecf-8a62-3731c99bfe9a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.899 228708 DEBUG oslo_concurrency.lockutils [req-d7fdea01-85fb-4a4c-9b0b-5979442e776c req-8b7bf675-a397-4ecf-8a62-3731c99bfe9a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.899 228708 DEBUG nova.compute.manager [req-d7fdea01-85fb-4a4c-9b0b-5979442e776c req-8b7bf675-a397-4ecf-8a62-3731c99bfe9a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] No waiting events found dispatching network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.900 228708 WARNING nova.compute.manager [req-d7fdea01-85fb-4a4c-9b0b-5979442e776c req-8b7bf675-a397-4ecf-8a62-3731c99bfe9a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received unexpected event network-vif-plugged-bce4add4-0500-49de-a844-e33f109cc5a7 for instance with vm_state active and task_state deleting.#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.900 228708 DEBUG nova.compute.manager [req-d7fdea01-85fb-4a4c-9b0b-5979442e776c req-8b7bf675-a397-4ecf-8a62-3731c99bfe9a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-deleted-bce4add4-0500-49de-a844-e33f109cc5a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.901 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.901 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.901 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.901 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.901 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.902 228708 INFO nova.compute.manager [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Terminating instance#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.903 228708 DEBUG nova.compute.manager [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:54:37 np0005534696 kernel: tap21840a13-67 (unregistering): left promiscuous mode
Nov 25 04:54:37 np0005534696 NetworkManager[48892]: <info>  [1764064477.9369] device (tap21840a13-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.942 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:37 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:37Z|00043|binding|INFO|Releasing lport 21840a13-6732-487c-8048-5f629bcfa4ff from this chassis (sb_readonly=0)
Nov 25 04:54:37 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:37Z|00044|binding|INFO|Setting lport 21840a13-6732-487c-8048-5f629bcfa4ff down in Southbound
Nov 25 04:54:37 np0005534696 ovn_controller[133535]: 2025-11-25T09:54:37Z|00045|binding|INFO|Removing iface tap21840a13-67 ovn-installed in OVS
Nov 25 04:54:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.945 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:37 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:37.952 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:3f:c2 10.100.0.7'], port_security=['fa:16:3e:c5:3f:c2 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ca79ca37-186d-411c-b60c-640a85d7c8a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f229e11a-d1eb-4c26-8d60-021e4739f1f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2598d6f-b466-4648-90e5-665655c38fd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=21840a13-6732-487c-8048-5f629bcfa4ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:54:37 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:37.953 142676 INFO neutron.agent.ovn.metadata.agent [-] Port 21840a13-6732-487c-8048-5f629bcfa4ff in datapath de5dde40-3ef0-4c85-b48a-62ea2f4c04e7 unbound from our chassis#033[00m
Nov 25 04:54:37 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:37.954 142676 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de5dde40-3ef0-4c85-b48a-62ea2f4c04e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:54:37 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:37.955 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c552d8-83b0-4ad8-adc0-dfe3b106b3a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:37 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:37.955 142676 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7 namespace which is not needed anymore#033[00m
Nov 25 04:54:37 np0005534696 nova_compute[228704]: 2025-11-25 09:54:37.965 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:37 np0005534696 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 25 04:54:37 np0005534696 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Consumed 11.367s CPU time.
Nov 25 04:54:37 np0005534696 systemd-machined[192760]: Machine qemu-1-instance-00000003 terminated.
Nov 25 04:54:38 np0005534696 neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7[232397]: [NOTICE]   (232401) : haproxy version is 2.8.14-c23fe91
Nov 25 04:54:38 np0005534696 neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7[232397]: [NOTICE]   (232401) : path to executable is /usr/sbin/haproxy
Nov 25 04:54:38 np0005534696 neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7[232397]: [ALERT]    (232401) : Current worker (232403) exited with code 143 (Terminated)
Nov 25 04:54:38 np0005534696 neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7[232397]: [WARNING]  (232401) : All workers exited. Exiting... (0)
Nov 25 04:54:38 np0005534696 systemd[1]: libpod-93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a.scope: Deactivated successfully.
Nov 25 04:54:38 np0005534696 podman[232757]: 2025-11-25 09:54:38.051142227 +0000 UTC m=+0.033075867 container died 93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 04:54:38 np0005534696 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a-userdata-shm.mount: Deactivated successfully.
Nov 25 04:54:38 np0005534696 systemd[1]: var-lib-containers-storage-overlay-9d40dc5f3db64b934bd0321967ccec695b0402d7f2247ac07c8fbec574237f1b-merged.mount: Deactivated successfully.
Nov 25 04:54:38 np0005534696 podman[232757]: 2025-11-25 09:54:38.073335207 +0000 UTC m=+0.055268847 container cleanup 93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:54:38 np0005534696 systemd[1]: libpod-conmon-93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a.scope: Deactivated successfully.
Nov 25 04:54:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:38 np0005534696 podman[232780]: 2025-11-25 09:54:38.111219884 +0000 UTC m=+0.022780417 container remove 93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.116 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.116 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[5adba70d-38dd-4e47-8fd4-529d87a5d580]: (4, ('Tue Nov 25 09:54:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7 (93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a)\n93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a\nTue Nov 25 09:54:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7 (93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a)\n93779af7139dd7a4001bf072a25caca162c98d0524b38f2f465a20f9506ee86a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.118 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea3995f-594c-4c9f-b5ca-03a0ef53e4a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.119 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde5dde40-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.119 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.120 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:38 np0005534696 kernel: tapde5dde40-30: left promiscuous mode
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.124 228708 INFO nova.virt.libvirt.driver [-] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Instance destroyed successfully.#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.124 228708 DEBUG nova.objects.instance [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid ca79ca37-186d-411c-b60c-640a85d7c8a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.138 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[b87f98c1-fbae-4483-9452-31f274403880]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.136 228708 DEBUG nova.virt.libvirt.vif [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:53:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-390927783',display_name='tempest-TestNetworkBasicOps-server-390927783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-390927783',id=3,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE/LOSsuqS79AZmC6VwV2XH9CWUBaXJh1TZaGbQ6eYP2j6spxOrdg2cOeHSfAGVBq21aZfYvQ6caaSpZDGxI5QanjNZPSsJ3dPGHUybUeoJjsrYJPbKSgEEOXBfITkpdaw==',key_name='tempest-TestNetworkBasicOps-244155696',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:54:05Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-94zvllik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:54:05Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=ca79ca37-186d-411c-b60c-640a85d7c8a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.136 228708 DEBUG nova.network.os_vif_util [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.137 228708 DEBUG nova.network.os_vif_util [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c5:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=21840a13-6732-487c-8048-5f629bcfa4ff,network=Network(de5dde40-3ef0-4c85-b48a-62ea2f4c04e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21840a13-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.137 228708 DEBUG os_vif [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=21840a13-6732-487c-8048-5f629bcfa4ff,network=Network(de5dde40-3ef0-4c85-b48a-62ea2f4c04e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21840a13-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.138 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.138 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21840a13-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.139 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.140 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.142 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.144 228708 INFO os_vif [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c5:3f:c2,bridge_name='br-int',has_traffic_filtering=True,id=21840a13-6732-487c-8048-5f629bcfa4ff,network=Network(de5dde40-3ef0-4c85-b48a-62ea2f4c04e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21840a13-67')#033[00m
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.145 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[e1423086-c2cc-4eb2-96f1-69e581a557f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.146 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[209d7756-5041-4238-a003-c92844568159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.156 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[514030ee-472f-4c4f-b352-087488c13e81]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 322909, 'reachable_time': 35178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 232810, 'error': None, 'target': 'ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:38 np0005534696 systemd[1]: run-netns-ovnmeta\x2dde5dde40\x2d3ef0\x2d4c85\x2db48a\x2d62ea2f4c04e7.mount: Deactivated successfully.
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.160 142787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de5dde40-3ef0-4c85-b48a-62ea2f4c04e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:54:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:54:38.160 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b80eb5-f9bd-4cdc-b829-2a5d77d4733c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.294 228708 INFO nova.virt.libvirt.driver [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Deleting instance files /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0_del#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.294 228708 INFO nova.virt.libvirt.driver [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Deletion of /var/lib/nova/instances/ca79ca37-186d-411c-b60c-640a85d7c8a0_del complete#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.334 228708 DEBUG nova.virt.libvirt.host [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.335 228708 INFO nova.virt.libvirt.host [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] UEFI support detected#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.335 228708 INFO nova.compute.manager [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.336 228708 DEBUG oslo.service.loopingcall [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.336 228708 DEBUG nova.compute.manager [-] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:54:38 np0005534696 nova_compute[228704]: 2025-11-25 09:54:38.336 228708 DEBUG nova.network.neutron [-] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:54:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.279 228708 DEBUG nova.network.neutron [-] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.289 228708 INFO nova.compute.manager [-] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Took 0.95 seconds to deallocate network for instance.#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.319 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.319 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.325 228708 DEBUG nova.network.neutron [req-8aa9c25a-351f-4e67-9f8d-91f63b70e86f req-51ddac77-a1f2-407e-bcbd-5274897dc2fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updated VIF entry in instance network info cache for port 21840a13-6732-487c-8048-5f629bcfa4ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.325 228708 DEBUG nova.network.neutron [req-8aa9c25a-351f-4e67-9f8d-91f63b70e86f req-51ddac77-a1f2-407e-bcbd-5274897dc2fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Updating instance_info_cache with network_info: [{"id": "21840a13-6732-487c-8048-5f629bcfa4ff", "address": "fa:16:3e:c5:3f:c2", "network": {"id": "de5dde40-3ef0-4c85-b48a-62ea2f4c04e7", "bridge": "br-int", "label": "tempest-network-smoke--1711553026", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21840a13-67", "ovs_interfaceid": "21840a13-6732-487c-8048-5f629bcfa4ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:54:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.342 228708 DEBUG oslo_concurrency.lockutils [req-8aa9c25a-351f-4e67-9f8d-91f63b70e86f req-51ddac77-a1f2-407e-bcbd-5274897dc2fa c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-ca79ca37-186d-411c-b60c-640a85d7c8a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.360 228708 DEBUG oslo_concurrency.processutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:54:39 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:54:39 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4111200644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.716 228708 DEBUG oslo_concurrency.processutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.720 228708 DEBUG nova.compute.provider_tree [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.732 228708 DEBUG nova.scheduler.client.report [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.745 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.764 228708 INFO nova.scheduler.client.report [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance ca79ca37-186d-411c-b60c-640a85d7c8a0#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.807 228708 DEBUG oslo_concurrency.lockutils [None req-829dfe36-819b-4d04-acc1-3d243ff74127 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:39.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.919 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.959 228708 DEBUG nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-unplugged-21840a13-6732-487c-8048-5f629bcfa4ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.959 228708 DEBUG oslo_concurrency.lockutils [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.959 228708 DEBUG oslo_concurrency.lockutils [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.959 228708 DEBUG oslo_concurrency.lockutils [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.959 228708 DEBUG nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] No waiting events found dispatching network-vif-unplugged-21840a13-6732-487c-8048-5f629bcfa4ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.960 228708 WARNING nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received unexpected event network-vif-unplugged-21840a13-6732-487c-8048-5f629bcfa4ff for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.960 228708 DEBUG nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.960 228708 DEBUG oslo_concurrency.lockutils [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.960 228708 DEBUG oslo_concurrency.lockutils [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.960 228708 DEBUG oslo_concurrency.lockutils [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "ca79ca37-186d-411c-b60c-640a85d7c8a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.961 228708 DEBUG nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] No waiting events found dispatching network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.961 228708 WARNING nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received unexpected event network-vif-plugged-21840a13-6732-487c-8048-5f629bcfa4ff for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.961 228708 DEBUG nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Received event network-vif-deleted-21840a13-6732-487c-8048-5f629bcfa4ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.961 228708 INFO nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Neutron deleted interface 21840a13-6732-487c-8048-5f629bcfa4ff; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.961 228708 DEBUG nova.network.neutron [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 25 04:54:39 np0005534696 nova_compute[228704]: 2025-11-25 09:54:39.964 228708 DEBUG nova.compute.manager [req-1d804afb-7c88-4b47-8190-a9d0fbcc8f56 req-0013050b-7449-45f2-ab17-2eaad767a757 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Detach interface failed, port_id=21840a13-6732-487c-8048-5f629bcfa4ff, reason: Instance ca79ca37-186d-411c-b60c-640a85d7c8a0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 04:54:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:54:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:40.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c004760 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:42.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:43 np0005534696 nova_compute[228704]: 2025-11-25 09:54:43.140 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:54:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:54:43 np0005534696 nova_compute[228704]: 2025-11-25 09:54:43.651 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:43 np0005534696 nova_compute[228704]: 2025-11-25 09:54:43.736 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:43.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c006ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:44 np0005534696 nova_compute[228704]: 2025-11-25 09:54:44.921 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:44.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c006ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:54:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:46.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:48 np0005534696 nova_compute[228704]: 2025-11-25 09:54:48.141 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:48 np0005534696 podman[232881]: 2025-11-25 09:54:48.335231465 +0000 UTC m=+0.043517072 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:54:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c006ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:54:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:49.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:54:49 np0005534696 nova_compute[228704]: 2025-11-25 09:54:49.922 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095453 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:54:53 np0005534696 nova_compute[228704]: 2025-11-25 09:54:53.123 228708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064478.1227667, ca79ca37-186d-411c-b60c-640a85d7c8a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:54:53 np0005534696 nova_compute[228704]: 2025-11-25 09:54:53.123 228708 INFO nova.compute.manager [-] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:54:53 np0005534696 nova_compute[228704]: 2025-11-25 09:54:53.138 228708 DEBUG nova.compute.manager [None req-a148e62d-a8f8-4f16-a419-39a53ed7fdc7 - - - - - -] [instance: ca79ca37-186d-411c-b60c-640a85d7c8a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:54:53 np0005534696 nova_compute[228704]: 2025-11-25 09:54:53.142 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00de20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:54:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:54:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:54:54 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:54:54 np0005534696 nova_compute[228704]: 2025-11-25 09:54:54.923 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:54.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:54:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:55.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff904005d30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:57 np0005534696 podman[233009]: 2025-11-25 09:54:57.641352185 +0000 UTC m=+0.090101885 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:54:57 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:54:57 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:54:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:57.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:58 np0005534696 nova_compute[228704]: 2025-11-25 09:54:58.144 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:54:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:54:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:54:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:54:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:54:59.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:54:59 np0005534696 nova_compute[228704]: 2025-11-25 09:54:59.925 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:54:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:54:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:54:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:54:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:54:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:00.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:01.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9000076b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:02.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:03 np0005534696 nova_compute[228704]: 2025-11-25 09:55:03.144 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:03.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9000076b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:04 np0005534696 podman[233065]: 2025-11-25 09:55:04.33015574 +0000 UTC m=+0.040809920 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 04:55:04 np0005534696 nova_compute[228704]: 2025-11-25 09:55:04.925 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:04.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:55:05.349 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:55:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:55:05.349 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:55:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:55:05.349 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:55:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:05.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9000076b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:07.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:08 np0005534696 nova_compute[228704]: 2025-11-25 09:55:08.145 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:08.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.869874) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509869897, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2399, "num_deletes": 251, "total_data_size": 6262093, "memory_usage": 6352000, "flush_reason": "Manual Compaction"}
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509878292, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4077759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20993, "largest_seqno": 23387, "table_properties": {"data_size": 4068072, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20087, "raw_average_key_size": 20, "raw_value_size": 4048608, "raw_average_value_size": 4101, "num_data_blocks": 267, "num_entries": 987, "num_filter_entries": 987, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064307, "oldest_key_time": 1764064307, "file_creation_time": 1764064509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8444 microseconds, and 5690 cpu microseconds.
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.878318) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4077759 bytes OK
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.878329) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.878811) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.878822) EVENT_LOG_v1 {"time_micros": 1764064509878820, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.878833) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6251541, prev total WAL file size 6251541, number of live WAL files 2.
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.879664) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(3982KB)], [39(11MB)]
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509879697, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16563449, "oldest_snapshot_seqno": -1}
Nov 25 04:55:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:09.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5459 keys, 14404175 bytes, temperature: kUnknown
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509909041, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14404175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14365687, "index_size": 23722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 137516, "raw_average_key_size": 25, "raw_value_size": 14264936, "raw_average_value_size": 2613, "num_data_blocks": 980, "num_entries": 5459, "num_filter_entries": 5459, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064509, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.909177) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14404175 bytes
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.911971) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 563.7 rd, 490.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.9 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5983, records dropped: 524 output_compression: NoCompression
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.911988) EVENT_LOG_v1 {"time_micros": 1764064509911981, "job": 22, "event": "compaction_finished", "compaction_time_micros": 29384, "compaction_time_cpu_micros": 20131, "output_level": 6, "num_output_files": 1, "total_output_size": 14404175, "num_input_records": 5983, "num_output_records": 5459, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509912614, "job": 22, "event": "table_file_deletion", "file_number": 41}
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064509913915, "job": 22, "event": "table_file_deletion", "file_number": 39}
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.879612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.913943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.913946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.913948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.913949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:55:09 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:55:09.913950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:55:09 np0005534696 nova_compute[228704]: 2025-11-25 09:55:09.927 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9000076b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0a70f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:10.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:11.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9000076b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:13 np0005534696 nova_compute[228704]: 2025-11-25 09:55:13.146 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0a70f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:13.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:14 np0005534696 nova_compute[228704]: 2025-11-25 09:55:14.929 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:14.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff900007850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:15.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff91c0a70f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:16.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.376 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.377 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.377 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:55:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:55:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4116000474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.712 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.907 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.908 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4928MB free_disk=59.94289016723633GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.908 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.908 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:55:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:17.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.951 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.951 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:55:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:17 np0005534696 nova_compute[228704]: 2025-11-25 09:55:17.968 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:55:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:18 np0005534696 nova_compute[228704]: 2025-11-25 09:55:18.147 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:55:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3968335134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:55:18 np0005534696 nova_compute[228704]: 2025-11-25 09:55:18.302 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:55:18 np0005534696 nova_compute[228704]: 2025-11-25 09:55:18.305 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:55:18 np0005534696 nova_compute[228704]: 2025-11-25 09:55:18.316 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:55:18 np0005534696 nova_compute[228704]: 2025-11-25 09:55:18.331 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:55:18 np0005534696 nova_compute[228704]: 2025-11-25 09:55:18.332 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:55:18 np0005534696 ovn_controller[133535]: 2025-11-25T09:55:18Z|00046|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 25 04:55:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:18.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:19 np0005534696 podman[233146]: 2025-11-25 09:55:19.324319007 +0000 UTC m=+0.033831460 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:55:19 np0005534696 nova_compute[228704]: 2025-11-25 09:55:19.331 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:19 np0005534696 nova_compute[228704]: 2025-11-25 09:55:19.332 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:55:19 np0005534696 nova_compute[228704]: 2025-11-25 09:55:19.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff92404b820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:19.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:19 np0005534696 nova_compute[228704]: 2025-11-25 09:55:19.931 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c0071f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:20 np0005534696 nova_compute[228704]: 2025-11-25 09:55:20.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:20.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:21 np0005534696 nova_compute[228704]: 2025-11-25 09:55:21.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:21.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:22 np0005534696 nova_compute[228704]: 2025-11-25 09:55:22.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:22.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:23 np0005534696 nova_compute[228704]: 2025-11-25 09:55:23.149 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:23.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924054830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:24 np0005534696 nova_compute[228704]: 2025-11-25 09:55:24.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:24 np0005534696 nova_compute[228704]: 2025-11-25 09:55:24.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:24 np0005534696 nova_compute[228704]: 2025-11-25 09:55:24.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:55:24 np0005534696 nova_compute[228704]: 2025-11-25 09:55:24.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:55:24 np0005534696 nova_compute[228704]: 2025-11-25 09:55:24.369 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:55:24 np0005534696 nova_compute[228704]: 2025-11-25 09:55:24.933 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:24.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:25 np0005534696 nova_compute[228704]: 2025-11-25 09:55:25.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:55:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924054830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:26 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:55:26.688 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:55:26 np0005534696 nova_compute[228704]: 2025-11-25 09:55:26.689 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:26 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:55:26.690 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:55:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:26.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095527 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:55:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:27.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:28 np0005534696 nova_compute[228704]: 2025-11-25 09:55:28.150 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:28 np0005534696 podman[233196]: 2025-11-25 09:55:28.356924288 +0000 UTC m=+0.063210492 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 04:55:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:28.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924054830 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:29.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:29 np0005534696 nova_compute[228704]: 2025-11-25 09:55:29.933 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:30.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:31 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:55:31.691 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:55:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924013420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:32.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:33 np0005534696 nova_compute[228704]: 2025-11-25 09:55:33.151 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:33.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924013420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:34 np0005534696 nova_compute[228704]: 2025-11-25 09:55:34.935 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:55:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:34.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:55:35 np0005534696 podman[233226]: 2025-11-25 09:55:35.354086468 +0000 UTC m=+0.051090424 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:55:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:35.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:55:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:36.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924013420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:38 np0005534696 nova_compute[228704]: 2025-11-25 09:55:38.153 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:38.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff90c007740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:55:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:55:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:55:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:39.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:39 np0005534696 nova_compute[228704]: 2025-11-25 09:55:39.936 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924013420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928002e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:40.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924013420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:41.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e4f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928002e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:55:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:43.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:43 np0005534696 nova_compute[228704]: 2025-11-25 09:55:43.154 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928002e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:55:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:55:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924013420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:44 np0005534696 nova_compute[228704]: 2025-11-25 09:55:44.937 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:55:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:45.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:55:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928002e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924013420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:55:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:55:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:47.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:48 np0005534696 nova_compute[228704]: 2025-11-25 09:55:48.155 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:49.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:49 np0005534696 nova_compute[228704]: 2025-11-25 09:55:49.939 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928004a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930004e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:50 np0005534696 podman[233285]: 2025-11-25 09:55:50.339279313 +0000 UTC m=+0.045913428 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 04:55:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:55:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:51.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:55:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928004a30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:53.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:53 np0005534696 nova_compute[228704]: 2025-11-25 09:55:53.157 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930005940 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 04:55:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785351946' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:55:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 04:55:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785351946' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:55:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:54 np0005534696 nova_compute[228704]: 2025-11-25 09:55:54.940 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:55.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928005740 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:55:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930005920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:57.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:57.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9280058c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930005920 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:58 np0005534696 nova_compute[228704]: 2025-11-25 09:55:58.158 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 04:55:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 04:55:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 04:55:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:55:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:55:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:55:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:55:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:55:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:55:59.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:55:59 np0005534696 podman[233390]: 2025-11-25 09:55:59.387679828 +0000 UTC m=+0.094700640 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 04:55:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:55:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:55:59 np0005534696 nova_compute[228704]: 2025-11-25 09:55:59.943 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:55:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:55:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:55:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:55:59.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:55:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:55:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:55:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:55:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9280058c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:56:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:01.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930006db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:01.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:56:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:56:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:03.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:03 np0005534696 nova_compute[228704]: 2025-11-25 09:56:03.159 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9280058c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:56:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:56:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:03.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930006db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:04 np0005534696 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 04:56:04 np0005534696 nova_compute[228704]: 2025-11-25 09:56:04.943 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:56:05.350 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:56:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:56:05.353 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:56:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:56:05.354 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:56:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9280058c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9280058c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:06 np0005534696 podman[233471]: 2025-11-25 09:56:06.335302274 +0000 UTC m=+0.046509962 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 04:56:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:56:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00e6d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:07.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930007ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:08 np0005534696 nova_compute[228704]: 2025-11-25 09:56:08.159 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:56:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:09.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:56:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9280058c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:09 np0005534696 nova_compute[228704]: 2025-11-25 09:56:09.945 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00ec20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00ec20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:11.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930007ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928005a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:13.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095613 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:56:13 np0005534696 nova_compute[228704]: 2025-11-25 09:56:13.160 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff930007ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff928005a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:14 np0005534696 nova_compute[228704]: 2025-11-25 09:56:14.945 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:15.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:15.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:16 np0005534696 nova_compute[228704]: 2025-11-25 09:56:16.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:17.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:17.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff940003820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:18 np0005534696 nova_compute[228704]: 2025-11-25 09:56:18.162 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:18 np0005534696 nova_compute[228704]: 2025-11-25 09:56:18.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:18 np0005534696 nova_compute[228704]: 2025-11-25 09:56:18.389 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:56:18 np0005534696 nova_compute[228704]: 2025-11-25 09:56:18.389 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:56:18 np0005534696 nova_compute[228704]: 2025-11-25 09:56:18.389 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:56:18 np0005534696 nova_compute[228704]: 2025-11-25 09:56:18.390 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:56:18 np0005534696 nova_compute[228704]: 2025-11-25 09:56:18.390 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:56:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:56:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2161517121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:56:18 np0005534696 nova_compute[228704]: 2025-11-25 09:56:18.780 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:56:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.025 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.026 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4940MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.026 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.027 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:56:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.132 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.132 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.147 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:56:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00ec20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:19 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:56:19 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1833756951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.523 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.528 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.541 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.542 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.542 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:56:19 np0005534696 nova_compute[228704]: 2025-11-25 09:56:19.949 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:19.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00ec20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff940004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:21.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:21 np0005534696 podman[233549]: 2025-11-25 09:56:21.354182311 +0000 UTC m=+0.063665667 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 25 04:56:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:21 np0005534696 nova_compute[228704]: 2025-11-25 09:56:21.544 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:21 np0005534696 nova_compute[228704]: 2025-11-25 09:56:21.544 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:21 np0005534696 nova_compute[228704]: 2025-11-25 09:56:21.544 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:21 np0005534696 nova_compute[228704]: 2025-11-25 09:56:21.544 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:21 np0005534696 nova_compute[228704]: 2025-11-25 09:56:21.545 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:56:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:23.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:23 np0005534696 nova_compute[228704]: 2025-11-25 09:56:23.163 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff940004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:23.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c003a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:24 np0005534696 nova_compute[228704]: 2025-11-25 09:56:24.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:24 np0005534696 nova_compute[228704]: 2025-11-25 09:56:24.952 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:25 np0005534696 nova_compute[228704]: 2025-11-25 09:56:25.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:25.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff940004360 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:26 np0005534696 nova_compute[228704]: 2025-11-25 09:56:26.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:26 np0005534696 nova_compute[228704]: 2025-11-25 09:56:26.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:56:26 np0005534696 nova_compute[228704]: 2025-11-25 09:56:26.355 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:56:26 np0005534696 nova_compute[228704]: 2025-11-25 09:56:26.355 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:56:26 np0005534696 nova_compute[228704]: 2025-11-25 09:56:26.371 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:56:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:27.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c003a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:27.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:28 np0005534696 nova_compute[228704]: 2025-11-25 09:56:28.164 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9400057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:29.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:29 np0005534696 nova_compute[228704]: 2025-11-25 09:56:29.953 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:29.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c003a60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:30 np0005534696 podman[233600]: 2025-11-25 09:56:30.389854466 +0000 UTC m=+0.080974450 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 04:56:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:31.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:31.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:33.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:33 np0005534696 nova_compute[228704]: 2025-11-25 09:56:33.165 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:56:33.249 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:56:33 np0005534696 nova_compute[228704]: 2025-11-25 09:56:33.249 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:33 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:56:33.250 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:56:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9400057d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:34 np0005534696 nova_compute[228704]: 2025-11-25 09:56:34.954 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:37.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:37 np0005534696 podman[233630]: 2025-11-25 09:56:37.340770795 +0000 UTC m=+0.049690672 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 04:56:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9400064e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:38 np0005534696 nova_compute[228704]: 2025-11-25 09:56:38.166 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:38 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:56:38.253 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:56:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c004d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:39 np0005534696 nova_compute[228704]: 2025-11-25 09:56:39.956 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9400064e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:41.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:42.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:43.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:43 np0005534696 nova_compute[228704]: 2025-11-25 09:56:43.167 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9400064e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:44.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:44 np0005534696 nova_compute[228704]: 2025-11-25 09:56:44.957 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:45.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:46.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9400064e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:47.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:48.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:48 np0005534696 nova_compute[228704]: 2025-11-25 09:56:48.169 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:49.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:49 np0005534696 nova_compute[228704]: 2025-11-25 09:56:49.959 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:50.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff94c0bfaf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:51.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:52.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:52 np0005534696 podman[233691]: 2025-11-25 09:56:52.341398901 +0000 UTC m=+0.041132700 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 04:56:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:56:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:53.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:56:53 np0005534696 nova_compute[228704]: 2025-11-25 09:56:53.170 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 04:56:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3382139923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 04:56:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 04:56:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3382139923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 04:56:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:54.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948002600 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:54 np0005534696 nova_compute[228704]: 2025-11-25 09:56:54.960 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:55.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:56:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:56:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:56.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:56:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:57.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:56:58.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:58 np0005534696 nova_compute[228704]: 2025-11-25 09:56:58.171 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:56:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:56:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:56:59.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:56:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:56:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:56:59 np0005534696 nova_compute[228704]: 2025-11-25 09:56:59.962 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:56:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:56:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:56:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:56:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:00.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:57:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:01.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:57:01 np0005534696 podman[233716]: 2025-11-25 09:57:01.370493445 +0000 UTC m=+0.077310088 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:57:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:57:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:02.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:57:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:02 np0005534696 podman[233871]: 2025-11-25 09:57:02.718543911 +0000 UTC m=+0.049085262 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 04:57:02 np0005534696 podman[233871]: 2025-11-25 09:57:02.803026478 +0000 UTC m=+0.133567829 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:57:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:03.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:03 np0005534696 nova_compute[228704]: 2025-11-25 09:57:03.172 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:03 np0005534696 podman[233984]: 2025-11-25 09:57:03.206562899 +0000 UTC m=+0.041209385 container exec 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:57:03 np0005534696 podman[233984]: 2025-11-25 09:57:03.213941728 +0000 UTC m=+0.048588224 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 04:57:03 np0005534696 podman[234053]: 2025-11-25 09:57:03.418035844 +0000 UTC m=+0.038558603 container exec 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:57:03 np0005534696 podman[234053]: 2025-11-25 09:57:03.429891377 +0000 UTC m=+0.050414135 container exec_died 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 04:57:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:03 np0005534696 podman[234105]: 2025-11-25 09:57:03.582068629 +0000 UTC m=+0.038092474 container exec 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.buildah.version=1.28.2, name=keepalived, io.openshift.expose-services=)
Nov 25 04:57:03 np0005534696 podman[234105]: 2025-11-25 09:57:03.611786978 +0000 UTC m=+0.067810823 container exec_died 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived)
Nov 25 04:57:03 np0005534696 podman[234146]: 2025-11-25 09:57:03.735863507 +0000 UTC m=+0.038950891 container exec 4b16e2c7ab5313d4b0a2c0091ef1011873f465d6fd9c16f3db1253b399dcde09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 04:57:03 np0005534696 podman[234146]: 2025-11-25 09:57:03.747872238 +0000 UTC m=+0.050959601 container exec_died 4b16e2c7ab5313d4b0a2c0091ef1011873f465d6fd9c16f3db1253b399dcde09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 25 04:57:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:04.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:57:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:57:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:57:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:57:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:57:04 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:57:04 np0005534696 nova_compute[228704]: 2025-11-25 09:57:04.963 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:57:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:05.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:57:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:57:05.350 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:57:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:57:05.350 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:57:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:57:05.350 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:57:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:06.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:07.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095707 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:57:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:07 np0005534696 podman[234311]: 2025-11-25 09:57:07.978800401 +0000 UTC m=+0.069703178 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 04:57:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:08.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:08 np0005534696 nova_compute[228704]: 2025-11-25 09:57:08.173 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:57:08 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:57:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:09.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:09 np0005534696 nova_compute[228704]: 2025-11-25 09:57:09.964 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:10.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:11.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924049350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:12.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc00aa70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:13.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:13 np0005534696 nova_compute[228704]: 2025-11-25 09:57:13.174 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:14.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:14 np0005534696 nova_compute[228704]: 2025-11-25 09:57:14.965 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:57:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:15.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:57:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:16.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:57:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:17.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:18.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:18 np0005534696 nova_compute[228704]: 2025-11-25 09:57:18.175 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:19.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:57:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:57:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:57:19 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:19 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.373 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:57:19 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.373 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:57:19 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.373 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:57:19 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.373 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:57:19 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.373 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:57:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:19 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:57:19 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3818507626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:57:19 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.716 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:57:19 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.965 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:19.999 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.000 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4922MB free_disk=59.89723205566406GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.000 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.001 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:57:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:20.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.055 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.055 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.074 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:57:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:57:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2969080741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.416 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.420 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.432 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.433 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:57:20 np0005534696 nova_compute[228704]: 2025-11-25 09:57:20.434 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:57:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:21 np0005534696 nova_compute[228704]: 2025-11-25 09:57:21.434 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:21 np0005534696 nova_compute[228704]: 2025-11-25 09:57:21.434 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:21 np0005534696 nova_compute[228704]: 2025-11-25 09:57:21.434 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:21 np0005534696 nova_compute[228704]: 2025-11-25 09:57:21.435 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:57:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:22.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 25 04:57:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:23 np0005534696 nova_compute[228704]: 2025-11-25 09:57:23.176 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:23 np0005534696 podman[234413]: 2025-11-25 09:57:23.324111026 +0000 UTC m=+0.035173316 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 04:57:23 np0005534696 nova_compute[228704]: 2025-11-25 09:57:23.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:24.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948003a40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c004760 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:24 np0005534696 nova_compute[228704]: 2025-11-25 09:57:24.967 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:25.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:25 np0005534696 nova_compute[228704]: 2025-11-25 09:57:25.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:26.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:26 np0005534696 nova_compute[228704]: 2025-11-25 09:57:26.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:26 np0005534696 nova_compute[228704]: 2025-11-25 09:57:26.358 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:57:26 np0005534696 nova_compute[228704]: 2025-11-25 09:57:26.358 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:57:26 np0005534696 nova_compute[228704]: 2025-11-25 09:57:26.372 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:57:26 np0005534696 nova_compute[228704]: 2025-11-25 09:57:26.373 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:57:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:57:27 np0005534696 nova_compute[228704]: 2025-11-25 09:57:27.367 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:57:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c0052a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095727 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.955845) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647955863, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1641, "num_deletes": 250, "total_data_size": 4005305, "memory_usage": 4068264, "flush_reason": "Manual Compaction"}
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647960542, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1602202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23392, "largest_seqno": 25028, "table_properties": {"data_size": 1597003, "index_size": 2403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13718, "raw_average_key_size": 20, "raw_value_size": 1585615, "raw_average_value_size": 2373, "num_data_blocks": 105, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064510, "oldest_key_time": 1764064510, "file_creation_time": 1764064647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4724 microseconds, and 3588 cpu microseconds.
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960567) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1602202 bytes OK
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960579) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960892) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960903) EVENT_LOG_v1 {"time_micros": 1764064647960901, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.960912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3997768, prev total WAL file size 3997768, number of live WAL files 2.
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1564KB)], [42(13MB)]
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647961561, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16006377, "oldest_snapshot_seqno": -1}
Nov 25 04:57:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5674 keys, 12928883 bytes, temperature: kUnknown
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647994243, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 12928883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12891795, "index_size": 21810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 142345, "raw_average_key_size": 25, "raw_value_size": 12790220, "raw_average_value_size": 2254, "num_data_blocks": 895, "num_entries": 5674, "num_filter_entries": 5674, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.994383) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12928883 bytes
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.994721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 489.3 rd, 395.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.7 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(18.1) write-amplify(8.1) OK, records in: 6127, records dropped: 453 output_compression: NoCompression
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.994743) EVENT_LOG_v1 {"time_micros": 1764064647994734, "job": 24, "event": "compaction_finished", "compaction_time_micros": 32716, "compaction_time_cpu_micros": 22558, "output_level": 6, "num_output_files": 1, "total_output_size": 12928883, "num_input_records": 6127, "num_output_records": 5674, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647995010, "job": 24, "event": "table_file_deletion", "file_number": 44}
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064647996812, "job": 24, "event": "table_file_deletion", "file_number": 42}
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.961480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.996834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.996836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.996837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.996838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:27 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:27.996839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:28.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9640027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:28 np0005534696 nova_compute[228704]: 2025-11-25 09:57:28.177 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:57:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:29.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:57:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:29 np0005534696 nova_compute[228704]: 2025-11-25 09:57:29.968 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:30.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c0052a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9640027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:31.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:57:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:32.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:57:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c0052a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:32 np0005534696 podman[234440]: 2025-11-25 09:57:32.343255069 +0000 UTC m=+0.054819103 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 04:57:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:33.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:33 np0005534696 nova_compute[228704]: 2025-11-25 09:57:33.179 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:57:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9640027d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:34.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:34 np0005534696 nova_compute[228704]: 2025-11-25 09:57:34.970 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:57:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:35.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c006710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:36.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9640040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:37.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:38.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:38 np0005534696 nova_compute[228704]: 2025-11-25 09:57:38.181 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:57:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9640040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:38 np0005534696 podman[234469]: 2025-11-25 09:57:38.32726344 +0000 UTC m=+0.037525897 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 04:57:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:39.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:39 np0005534696 nova_compute[228704]: 2025-11-25 09:57:39.972 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:57:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:40.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924041420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:41.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff964004dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:42.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c007030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:43 np0005534696 nova_compute[228704]: 2025-11-25 09:57:43.183 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:57:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:44.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff964004dc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c007030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:44 np0005534696 nova_compute[228704]: 2025-11-25 09:57:44.975 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:57:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:45.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:57:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:46.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:57:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff964005ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:47.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c007030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:48.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:48 np0005534696 nova_compute[228704]: 2025-11-25 09:57:48.185 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:57:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:49.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:49 np0005534696 nova_compute[228704]: 2025-11-25 09:57:49.977 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:57:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:50.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff95c007030 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:51.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_31] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:52.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.962582) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672962606, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 476, "num_deletes": 257, "total_data_size": 640343, "memory_usage": 650072, "flush_reason": "Manual Compaction"}
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672965139, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 422803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25033, "largest_seqno": 25504, "table_properties": {"data_size": 420215, "index_size": 624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 5728, "raw_average_key_size": 17, "raw_value_size": 415138, "raw_average_value_size": 1246, "num_data_blocks": 28, "num_entries": 333, "num_filter_entries": 333, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064648, "oldest_key_time": 1764064648, "file_creation_time": 1764064672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 2587 microseconds, and 1498 cpu microseconds.
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965167) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 422803 bytes OK
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965179) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965547) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965561) EVENT_LOG_v1 {"time_micros": 1764064672965558, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965571) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 637450, prev total WAL file size 637450, number of live WAL files 2.
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(412KB)], [45(12MB)]
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672965860, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13351686, "oldest_snapshot_seqno": -1}
Nov 25 04:57:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5485 keys, 13189100 bytes, temperature: kUnknown
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672996053, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13189100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13152539, "index_size": 21731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 139582, "raw_average_key_size": 25, "raw_value_size": 13053525, "raw_average_value_size": 2379, "num_data_blocks": 886, "num_entries": 5485, "num_filter_entries": 5485, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064672, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.996257) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13189100 bytes
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.996602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 441.3 rd, 435.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 12.3 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(62.8) write-amplify(31.2) OK, records in: 6007, records dropped: 522 output_compression: NoCompression
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.996617) EVENT_LOG_v1 {"time_micros": 1764064672996609, "job": 26, "event": "compaction_finished", "compaction_time_micros": 30255, "compaction_time_cpu_micros": 22556, "output_level": 6, "num_output_files": 1, "total_output_size": 13189100, "num_input_records": 6007, "num_output_records": 5485, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672996776, "job": 26, "event": "table_file_deletion", "file_number": 47}
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064672998174, "job": 26, "event": "table_file_deletion", "file_number": 45}
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.965815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.998235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.998240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.998242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.998243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:52 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:57:52.998244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:57:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:53.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:53 np0005534696 nova_compute[228704]: 2025-11-25 09:57:53.187 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9680040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:54.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff968002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:54 np0005534696 podman[234529]: 2025-11-25 09:57:54.357072676 +0000 UTC m=+0.064652435 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 04:57:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:54 np0005534696 nova_compute[228704]: 2025-11-25 09:57:54.979 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:55.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:57:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:57:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:56.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:57:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:57.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:57:58.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:58 np0005534696 nova_compute[228704]: 2025-11-25 09:57:58.189 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:57:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:57:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:57:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:57:59.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:57:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:57:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948005570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:57:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:57:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:57:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:57:59 np0005534696 nova_compute[228704]: 2025-11-25 09:57:59.981 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff948005570 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:01.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:02 np0005534696 podman[234577]: 2025-11-25 09:58:02.50721829 +0000 UTC m=+0.060511365 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 04:58:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:03.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:03 np0005534696 nova_compute[228704]: 2025-11-25 09:58:03.191 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:03 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:04.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:04 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff968002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:04 np0005534696 nova_compute[228704]: 2025-11-25 09:58:04.983 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:58:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:05.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:58:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:05.354 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:05.355 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:05.356 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:05 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:06.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:06 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:07.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:07 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff968002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:08.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:08 np0005534696 nova_compute[228704]: 2025-11-25 09:58:08.193 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:08 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 04:58:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:09.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 04:58:09 np0005534696 podman[234689]: 2025-11-25 09:58:09.346234104 +0000 UTC m=+0.051159388 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 04:58:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:09 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:09 np0005534696 nova_compute[228704]: 2025-11-25 09:58:09.985 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:58:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:10.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:58:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:10 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:58:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:58:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:58:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:58:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:58:10 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:58:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:11 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000008s ======
Nov 25 04:58:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:12.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Nov 25 04:58:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff968002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:12 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff968002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:58:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:58:13 np0005534696 nova_compute[228704]: 2025-11-25 09:58:13.194 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:13 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:14 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:58:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:58:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:14 np0005534696 nova_compute[228704]: 2025-11-25 09:58:14.990 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:15.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:15 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:16 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:16 np0005534696 nova_compute[228704]: 2025-11-25 09:58:16.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:17.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:17 np0005534696 nova_compute[228704]: 2025-11-25 09:58:17.365 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:17 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff968002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:17 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:17.866 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:58:17 np0005534696 nova_compute[228704]: 2025-11-25 09:58:17.866 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:17 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:17.867 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:58:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:58:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:58:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:18 np0005534696 nova_compute[228704]: 2025-11-25 09:58:18.196 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:18 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:19.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.377 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.377 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:19 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:19 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:58:19 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1899497758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.730 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.914 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.915 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4919MB free_disk=59.89691925048828GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.916 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.916 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:19 np0005534696 nova_compute[228704]: 2025-11-25 09:58:19.993 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:20 np0005534696 nova_compute[228704]: 2025-11-25 09:58:20.018 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:58:20 np0005534696 nova_compute[228704]: 2025-11-25 09:58:20.019 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:58:20 np0005534696 nova_compute[228704]: 2025-11-25 09:58:20.089 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:20.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff968002ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:20 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:58:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/323938496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:58:20 np0005534696 nova_compute[228704]: 2025-11-25 09:58:20.434 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:20 np0005534696 nova_compute[228704]: 2025-11-25 09:58:20.438 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:58:20 np0005534696 nova_compute[228704]: 2025-11-25 09:58:20.450 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:58:20 np0005534696 nova_compute[228704]: 2025-11-25 09:58:20.451 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:58:20 np0005534696 nova_compute[228704]: 2025-11-25 09:58:20.451 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:21.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:21 np0005534696 nova_compute[228704]: 2025-11-25 09:58:21.452 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:21 np0005534696 nova_compute[228704]: 2025-11-25 09:58:21.452 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:21 np0005534696 nova_compute[228704]: 2025-11-25 09:58:21.452 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:21 np0005534696 nova_compute[228704]: 2025-11-25 09:58:21.452 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:58:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:21 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:58:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:58:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff93c005aa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:22 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff96801a770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:22 np0005534696 nova_compute[228704]: 2025-11-25 09:58:22.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:22 np0005534696 nova_compute[228704]: 2025-11-25 09:58:22.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 04:58:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:23.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:23 np0005534696 nova_compute[228704]: 2025-11-25 09:58:23.198 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:23 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:23 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:23.870 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:58:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:24.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:58:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:24 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9700040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:24 np0005534696 nova_compute[228704]: 2025-11-25 09:58:24.376 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:24 np0005534696 nova_compute[228704]: 2025-11-25 09:58:24.376 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:24 np0005534696 nova_compute[228704]: 2025-11-25 09:58:24.376 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 04:58:24 np0005534696 nova_compute[228704]: 2025-11-25 09:58:24.387 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 04:58:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:24 np0005534696 nova_compute[228704]: 2025-11-25 09:58:24.995 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:25.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:25 np0005534696 podman[234817]: 2025-11-25 09:58:25.326233768 +0000 UTC m=+0.037265106 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 04:58:25 np0005534696 nova_compute[228704]: 2025-11-25 09:58:25.368 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:25 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff96801a770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:26 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:26 np0005534696 nova_compute[228704]: 2025-11-25 09:58:26.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:27.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:27 np0005534696 nova_compute[228704]: 2025-11-25 09:58:27.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:27 np0005534696 nova_compute[228704]: 2025-11-25 09:58:27.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:58:27 np0005534696 nova_compute[228704]: 2025-11-25 09:58:27.358 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:58:27 np0005534696 nova_compute[228704]: 2025-11-25 09:58:27.369 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:58:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:27 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9700040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:28.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9700040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:28 np0005534696 nova_compute[228704]: 2025-11-25 09:58:28.200 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:28 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:29.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:29 np0005534696 nova_compute[228704]: 2025-11-25 09:58:29.364 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:58:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:29 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:29 np0005534696 nova_compute[228704]: 2025-11-25 09:58:29.996 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:30.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:30 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff96801a770 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:31.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:31 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:32.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:32 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:58:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:58:33 np0005534696 nova_compute[228704]: 2025-11-25 09:58:33.201 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:33 np0005534696 podman[234843]: 2025-11-25 09:58:33.346545863 +0000 UTC m=+0.058737893 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 04:58:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:33 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff980002e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:34 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:34 np0005534696 nova_compute[228704]: 2025-11-25 09:58:34.997 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:35.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:35 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095835 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:58:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:36.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9800039b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:36 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:37.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095837 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:58:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:37 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:38 np0005534696 nova_compute[228704]: 2025-11-25 09:58:38.203 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:38 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:39.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:39 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9800042d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:39 np0005534696 nova_compute[228704]: 2025-11-25 09:58:39.998 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:40.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:40 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:40 np0005534696 podman[234873]: 2025-11-25 09:58:40.332308582 +0000 UTC m=+0.044537989 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 04:58:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:41.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:41 np0005534696 nova_compute[228704]: 2025-11-25 09:58:41.549 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "2263bfd6-6d17-4f29-80a6-4de684c71b20" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:41 np0005534696 nova_compute[228704]: 2025-11-25 09:58:41.549 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:41 np0005534696 nova_compute[228704]: 2025-11-25 09:58:41.568 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 04:58:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:41 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:41 np0005534696 nova_compute[228704]: 2025-11-25 09:58:41.667 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:41 np0005534696 nova_compute[228704]: 2025-11-25 09:58:41.667 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.685204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721685230, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 751, "num_deletes": 251, "total_data_size": 1539645, "memory_usage": 1564592, "flush_reason": "Manual Compaction"}
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721688991, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1017328, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25509, "largest_seqno": 26255, "table_properties": {"data_size": 1013658, "index_size": 1514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8548, "raw_average_key_size": 19, "raw_value_size": 1006258, "raw_average_value_size": 2323, "num_data_blocks": 66, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064673, "oldest_key_time": 1764064673, "file_creation_time": 1764064721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3813 microseconds, and 2804 cpu microseconds.
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689016) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1017328 bytes OK
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689028) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689356) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689369) EVENT_LOG_v1 {"time_micros": 1764064721689366, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689379) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1535637, prev total WAL file size 1535637, number of live WAL files 2.
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(993KB)], [48(12MB)]
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721689833, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14206428, "oldest_snapshot_seqno": -1}
Nov 25 04:58:41 np0005534696 nova_compute[228704]: 2025-11-25 09:58:41.713 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 04:58:41 np0005534696 nova_compute[228704]: 2025-11-25 09:58:41.713 228708 INFO nova.compute.claims [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5400 keys, 12081286 bytes, temperature: kUnknown
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721716539, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12081286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12046258, "index_size": 20454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 138510, "raw_average_key_size": 25, "raw_value_size": 11949663, "raw_average_value_size": 2212, "num_data_blocks": 829, "num_entries": 5400, "num_filter_entries": 5400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.716696) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12081286 bytes
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.717223) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 531.4 rd, 451.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.6 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(25.8) write-amplify(11.9) OK, records in: 5918, records dropped: 518 output_compression: NoCompression
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.717238) EVENT_LOG_v1 {"time_micros": 1764064721717231, "job": 28, "event": "compaction_finished", "compaction_time_micros": 26734, "compaction_time_cpu_micros": 21767, "output_level": 6, "num_output_files": 1, "total_output_size": 12081286, "num_input_records": 5918, "num_output_records": 5400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721717419, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064721718898, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.689710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:58:41 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-09:58:41.718922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 04:58:41 np0005534696 nova_compute[228704]: 2025-11-25 09:58:41.783 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:42 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:58:42 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/110315522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.125 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.128 228708 DEBUG nova.compute.provider_tree [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:58:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.143 228708 DEBUG nova.scheduler.client.report [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.169 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.170 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 04:58:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.225 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.226 228708 DEBUG nova.network.neutron [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.241 228708 INFO nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 04:58:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:42 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.255 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.324 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.324 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.325 228708 INFO nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Creating image(s)#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.345 228708 DEBUG nova.storage.rbd_utils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.365 228708 DEBUG nova.storage.rbd_utils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.386 228708 DEBUG nova.storage.rbd_utils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.389 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.435 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.436 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.437 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.437 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.457 228708 DEBUG nova.storage.rbd_utils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.460 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.598 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.646 228708 DEBUG nova.storage.rbd_utils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.705 228708 DEBUG nova.objects.instance [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid 2263bfd6-6d17-4f29-80a6-4de684c71b20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.715 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.715 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Ensure instance console log exists: /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.715 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.715 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:42 np0005534696 nova_compute[228704]: 2025-11-25 09:58:42.716 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:43.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:43 np0005534696 nova_compute[228704]: 2025-11-25 09:58:43.204 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:43 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:44.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:44 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:45 np0005534696 nova_compute[228704]: 2025-11-25 09:58:44.999 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:45 np0005534696 nova_compute[228704]: 2025-11-25 09:58:45.341 228708 DEBUG nova.policy [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 04:58:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:45 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:46.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:46 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:47 np0005534696 nova_compute[228704]: 2025-11-25 09:58:47.112 228708 DEBUG nova.network.neutron [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Successfully updated port: 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 04:58:47 np0005534696 nova_compute[228704]: 2025-11-25 09:58:47.133 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:58:47 np0005534696 nova_compute[228704]: 2025-11-25 09:58:47.134 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:58:47 np0005534696 nova_compute[228704]: 2025-11-25 09:58:47.134 228708 DEBUG nova.network.neutron [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 04:58:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:47.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:47 np0005534696 nova_compute[228704]: 2025-11-25 09:58:47.203 228708 DEBUG nova.compute.manager [req-f4b0d6f8-8dd7-4934-9a3d-d2c0eae47bd3 req-6ad33e5b-c9dc-438b-bc18-1b0ccbf5f32a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received event network-changed-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:58:47 np0005534696 nova_compute[228704]: 2025-11-25 09:58:47.203 228708 DEBUG nova.compute.manager [req-f4b0d6f8-8dd7-4934-9a3d-d2c0eae47bd3 req-6ad33e5b-c9dc-438b-bc18-1b0ccbf5f32a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Refreshing instance network info cache due to event network-changed-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:58:47 np0005534696 nova_compute[228704]: 2025-11-25 09:58:47.203 228708 DEBUG oslo_concurrency.lockutils [req-f4b0d6f8-8dd7-4934-9a3d-d2c0eae47bd3 req-6ad33e5b-c9dc-438b-bc18-1b0ccbf5f32a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:58:47 np0005534696 nova_compute[228704]: 2025-11-25 09:58:47.254 228708 DEBUG nova.network.neutron [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 04:58:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:47 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:48.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.206 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:48 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.561 228708 DEBUG nova.network.neutron [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Updating instance_info_cache with network_info: [{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.575 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.576 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Instance network_info: |[{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.576 228708 DEBUG oslo_concurrency.lockutils [req-f4b0d6f8-8dd7-4934-9a3d-d2c0eae47bd3 req-6ad33e5b-c9dc-438b-bc18-1b0ccbf5f32a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.576 228708 DEBUG nova.network.neutron [req-f4b0d6f8-8dd7-4934-9a3d-d2c0eae47bd3 req-6ad33e5b-c9dc-438b-bc18-1b0ccbf5f32a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Refreshing network info cache for port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.578 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Start _get_guest_xml network_info=[{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.582 228708 WARNING nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.586 228708 DEBUG nova.virt.libvirt.host [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.587 228708 DEBUG nova.virt.libvirt.host [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.592 228708 DEBUG nova.virt.libvirt.host [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.593 228708 DEBUG nova.virt.libvirt.host [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.593 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.593 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.594 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.594 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.594 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.594 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.594 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.595 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.595 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.595 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.595 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.595 228708 DEBUG nova.virt.hardware [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.598 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:48 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 04:58:48 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4089513043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.948 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.966 228708 DEBUG nova.storage.rbd_utils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:58:48 np0005534696 nova_compute[228704]: 2025-11-25 09:58:48.969 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:49 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 04:58:49 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/998264127' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.320 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.322 228708 DEBUG nova.virt.libvirt.vif [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:58:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-937118943',display_name='tempest-TestNetworkBasicOps-server-937118943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-937118943',id=8,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGohUWqWT1QHj/KLwuiH7ORJbolYBLSV5Lv6EWTBjqEr4nTrdTXQjXPMJicpI70FfKP9ZkwyZjNVtsGR8bLRkMsHzSWJ0qeT1Bvfk9HdXH5ScIcn7fxUJcNIOAiIqLENMA==',key_name='tempest-TestNetworkBasicOps-1784472797',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-xi0oerxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:58:42Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=2263bfd6-6d17-4f29-80a6-4de684c71b20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.322 228708 DEBUG nova.network.os_vif_util [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.323 228708 DEBUG nova.network.os_vif_util [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.324 228708 DEBUG nova.objects.instance [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2263bfd6-6d17-4f29-80a6-4de684c71b20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.338 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] End _get_guest_xml xml=<domain type="kvm">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <uuid>2263bfd6-6d17-4f29-80a6-4de684c71b20</uuid>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <name>instance-00000008</name>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <memory>131072</memory>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <vcpu>1</vcpu>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <metadata>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <nova:name>tempest-TestNetworkBasicOps-server-937118943</nova:name>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <nova:creationTime>2025-11-25 09:58:48</nova:creationTime>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <nova:flavor name="m1.nano">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <nova:memory>128</nova:memory>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <nova:disk>1</nova:disk>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <nova:swap>0</nova:swap>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <nova:vcpus>1</nova:vcpus>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      </nova:flavor>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <nova:owner>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      </nova:owner>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <nova:ports>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <nova:port uuid="9c65e9ae-66c9-44ad-8fb1-f07f28d9b619">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        </nova:port>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      </nova:ports>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </nova:instance>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  </metadata>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <sysinfo type="smbios">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <system>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <entry name="manufacturer">RDO</entry>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <entry name="product">OpenStack Compute</entry>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <entry name="serial">2263bfd6-6d17-4f29-80a6-4de684c71b20</entry>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <entry name="uuid">2263bfd6-6d17-4f29-80a6-4de684c71b20</entry>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <entry name="family">Virtual Machine</entry>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </system>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  </sysinfo>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <os>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <boot dev="hd"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <smbios mode="sysinfo"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  </os>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <features>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <acpi/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <apic/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <vmcoreinfo/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  </features>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <clock offset="utc">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <timer name="hpet" present="no"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  </clock>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <cpu mode="host-model" match="exact">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  <devices>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <disk type="network" device="disk">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <driver type="raw" cache="none"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <source protocol="rbd" name="vms/2263bfd6-6d17-4f29-80a6-4de684c71b20_disk">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <host name="192.168.122.102" port="6789"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <host name="192.168.122.101" port="6789"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      </source>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <auth username="openstack">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      </auth>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <target dev="vda" bus="virtio"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <disk type="network" device="cdrom">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <driver type="raw" cache="none"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <source protocol="rbd" name="vms/2263bfd6-6d17-4f29-80a6-4de684c71b20_disk.config">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <host name="192.168.122.100" port="6789"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <host name="192.168.122.102" port="6789"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <host name="192.168.122.101" port="6789"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      </source>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <auth username="openstack">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:        <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      </auth>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <target dev="sda" bus="sata"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </disk>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <interface type="ethernet">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <mac address="fa:16:3e:dd:53:a1"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <model type="virtio"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <mtu size="1442"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <target dev="tap9c65e9ae-66"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </interface>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <serial type="pty">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <log file="/var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20/console.log" append="off"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </serial>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <video>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <model type="virtio"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </video>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <input type="tablet" bus="usb"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <rng model="virtio">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <backend model="random">/dev/urandom</backend>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </rng>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <controller type="usb" index="0"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    <memballoon model="virtio">
Nov 25 04:58:49 np0005534696 nova_compute[228704]:      <stats period="10"/>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:    </memballoon>
Nov 25 04:58:49 np0005534696 nova_compute[228704]:  </devices>
Nov 25 04:58:49 np0005534696 nova_compute[228704]: </domain>
Nov 25 04:58:49 np0005534696 nova_compute[228704]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.339 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Preparing to wait for external event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.339 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.339 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.340 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.340 228708 DEBUG nova.virt.libvirt.vif [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T09:58:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-937118943',display_name='tempest-TestNetworkBasicOps-server-937118943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-937118943',id=8,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGohUWqWT1QHj/KLwuiH7ORJbolYBLSV5Lv6EWTBjqEr4nTrdTXQjXPMJicpI70FfKP9ZkwyZjNVtsGR8bLRkMsHzSWJ0qeT1Bvfk9HdXH5ScIcn7fxUJcNIOAiIqLENMA==',key_name='tempest-TestNetworkBasicOps-1784472797',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-xi0oerxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T09:58:42Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=2263bfd6-6d17-4f29-80a6-4de684c71b20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.340 228708 DEBUG nova.network.os_vif_util [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.341 228708 DEBUG nova.network.os_vif_util [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.341 228708 DEBUG os_vif [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.342 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.342 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.342 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.345 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.345 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c65e9ae-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.345 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c65e9ae-66, col_values=(('external_ids', {'iface-id': '9c65e9ae-66c9-44ad-8fb1-f07f28d9b619', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:53:a1', 'vm-uuid': '2263bfd6-6d17-4f29-80a6-4de684c71b20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:49 np0005534696 NetworkManager[48892]: <info>  [1764064729.3478] manager: (tap9c65e9ae-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.349 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.352 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.353 228708 INFO os_vif [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66')#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.392 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.392 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.392 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:dd:53:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.393 228708 INFO nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Using config drive#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.408 228708 DEBUG nova.storage.rbd_utils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:58:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:49 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.665 228708 INFO nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Creating config drive at /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20/disk.config#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.669 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9kvgbgo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.786 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz9kvgbgo" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.805 228708 DEBUG nova.storage.rbd_utils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.807 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20/disk.config 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.891 228708 DEBUG oslo_concurrency.processutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20/disk.config 2263bfd6-6d17-4f29-80a6-4de684c71b20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.892 228708 INFO nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Deleting local config drive /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20/disk.config because it was imported into RBD.#033[00m
Nov 25 04:58:49 np0005534696 systemd[1]: Starting libvirt secret daemon...
Nov 25 04:58:49 np0005534696 systemd[1]: Started libvirt secret daemon.
Nov 25 04:58:49 np0005534696 kernel: tap9c65e9ae-66: entered promiscuous mode
Nov 25 04:58:49 np0005534696 NetworkManager[48892]: <info>  [1764064729.9555] manager: (tap9c65e9ae-66): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Nov 25 04:58:49 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:49Z|00047|binding|INFO|Claiming lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for this chassis.
Nov 25 04:58:49 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:49Z|00048|binding|INFO|9c65e9ae-66c9-44ad-8fb1-f07f28d9b619: Claiming fa:16:3e:dd:53:a1 10.100.0.9
Nov 25 04:58:49 np0005534696 nova_compute[228704]: 2025-11-25 09:58:49.957 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.973 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:53:a1 10.100.0.9'], port_security=['fa:16:3e:dd:53:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2263bfd6-6d17-4f29-80a6-4de684c71b20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4110b518-ed62-4127-a552-a8ff9779dc23, chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.974 142676 INFO neutron.agent.ovn.metadata.agent [-] Port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 in datapath b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f bound to our chassis#033[00m
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.975 142676 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f#033[00m
Nov 25 04:58:49 np0005534696 systemd-udevd[235268]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:58:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.983 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[93fb9c51-ecd8-40a0-bd01-116ebe195684]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.984 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1cfdfd5-81 in ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.985 232274 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1cfdfd5-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.985 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[091dda8a-a71e-4465-a831-35d487afb826]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.986 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7991b9-6c32-4197-b4d0-5e1db14547ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:49 np0005534696 systemd-machined[192760]: New machine qemu-2-instance-00000008.
Nov 25 04:58:49 np0005534696 NetworkManager[48892]: <info>  [1764064729.9915] device (tap9c65e9ae-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 04:58:49 np0005534696 NetworkManager[48892]: <info>  [1764064729.9921] device (tap9c65e9ae-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 04:58:49 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:49.993 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce19041-35fd-431c-a132-e76eda572205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:49 np0005534696 systemd[1]: Started Virtual Machine qemu-2-instance-00000008.
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.016 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[798d7475-f2b9-4335-a9d1-c1c554de1afc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.030 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.034 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[0163b36c-8455-4d8c-96fa-7e28f23e45eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.035 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:50 np0005534696 systemd-udevd[235272]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 04:58:50 np0005534696 NetworkManager[48892]: <info>  [1764064730.0386] manager: (tapb1cfdfd5-80): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Nov 25 04:58:50 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:50Z|00049|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 ovn-installed in OVS
Nov 25 04:58:50 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:50Z|00050|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 up in Southbound
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.040 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.041 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa5278-5c98-4219-bfbc-ced1fc08ef2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.063 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[589fae80-f65a-4213-ac6d-da873a27aeb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.065 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[3831ff2f-3cf0-4d0d-aa4e-ccdce313d46c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 NetworkManager[48892]: <info>  [1764064730.0798] device (tapb1cfdfd5-80): carrier: link connected
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.083 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2e0282-9771-4ef8-b119-1b68430efa41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.095 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[82849aa3-fc85-4058-a982-ab33dd3f2f78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1cfdfd5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:b0:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 351150, 'reachable_time': 42969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235293, 'error': None, 'target': 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.106 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[07617da4-43a0-4247-a4e4-b8f5cf6159cc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:b0c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 351150, 'tstamp': 351150}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235294, 'error': None, 'target': 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.115 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[3453e4a3-9a0d-453c-b73b-61d83e44d67b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1cfdfd5-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:b0:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 351150, 'reachable_time': 42969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235295, 'error': None, 'target': 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.133 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[c4572a62-3395-4427-836f-d3fdec7be36a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:50.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.168 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d3e1b8-25b1-4ffa-b555-bb8a3d31f88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.170 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1cfdfd5-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.170 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.170 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1cfdfd5-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.172 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:50 np0005534696 kernel: tapb1cfdfd5-80: entered promiscuous mode
Nov 25 04:58:50 np0005534696 NetworkManager[48892]: <info>  [1764064730.1738] manager: (tapb1cfdfd5-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.174 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1cfdfd5-80, col_values=(('external_ids', {'iface-id': '296dedf0-24b8-4ce5-952e-492b27ffb1cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.175 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.177 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:50 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:50Z|00051|binding|INFO|Releasing lport 296dedf0-24b8-4ce5-952e-492b27ffb1cd from this chassis (sb_readonly=0)
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.177 142676 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.178 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8cc449-51d8-4b77-a8a6-048fb2bde7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.178 142676 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: global
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    log         /dev/log local0 debug
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    log-tag     haproxy-metadata-proxy-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    user        root
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    group       root
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    maxconn     1024
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    pidfile     /var/lib/neutron/external/pids/b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f.pid.haproxy
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    daemon
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: defaults
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    log global
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    mode http
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    option httplog
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    option dontlognull
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    option http-server-close
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    option forwardfor
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    retries                 3
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    timeout http-request    30s
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    timeout connect         30s
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    timeout client          32s
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    timeout server          32s
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    timeout http-keep-alive 30s
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: listen listener
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    bind 169.254.169.254:80
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]:    http-request add-header X-OVN-Network-ID b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 04:58:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:50 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:50.180 142676 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'env', 'PROCESS_TAG=haproxy-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.191 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:50 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.323 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064730.322934, 2263bfd6-6d17-4f29-80a6-4de684c71b20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.324 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] VM Started (Lifecycle Event)#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.342 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.347 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064730.3231726, 2263bfd6-6d17-4f29-80a6-4de684c71b20 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.347 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] VM Paused (Lifecycle Event)#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.359 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.362 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.371 228708 DEBUG nova.network.neutron [req-f4b0d6f8-8dd7-4934-9a3d-d2c0eae47bd3 req-6ad33e5b-c9dc-438b-bc18-1b0ccbf5f32a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Updated VIF entry in instance network info cache for port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.372 228708 DEBUG nova.network.neutron [req-f4b0d6f8-8dd7-4934-9a3d-d2c0eae47bd3 req-6ad33e5b-c9dc-438b-bc18-1b0ccbf5f32a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Updating instance_info_cache with network_info: [{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.382 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.385 228708 DEBUG oslo_concurrency.lockutils [req-f4b0d6f8-8dd7-4934-9a3d-d2c0eae47bd3 req-6ad33e5b-c9dc-438b-bc18-1b0ccbf5f32a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.462 228708 DEBUG nova.compute.manager [req-7e7e4d24-eaa7-4fa3-a1fe-54b7fc53f283 req-6086eccc-a884-43a5-b3b3-29813f62e9c9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.463 228708 DEBUG oslo_concurrency.lockutils [req-7e7e4d24-eaa7-4fa3-a1fe-54b7fc53f283 req-6086eccc-a884-43a5-b3b3-29813f62e9c9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.463 228708 DEBUG oslo_concurrency.lockutils [req-7e7e4d24-eaa7-4fa3-a1fe-54b7fc53f283 req-6086eccc-a884-43a5-b3b3-29813f62e9c9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.463 228708 DEBUG oslo_concurrency.lockutils [req-7e7e4d24-eaa7-4fa3-a1fe-54b7fc53f283 req-6086eccc-a884-43a5-b3b3-29813f62e9c9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.464 228708 DEBUG nova.compute.manager [req-7e7e4d24-eaa7-4fa3-a1fe-54b7fc53f283 req-6086eccc-a884-43a5-b3b3-29813f62e9c9 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Processing event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.464 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.469 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064730.4693935, 2263bfd6-6d17-4f29-80a6-4de684c71b20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.469 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] VM Resumed (Lifecycle Event)#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.471 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.473 228708 INFO nova.virt.libvirt.driver [-] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Instance spawned successfully.#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.474 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 04:58:50 np0005534696 podman[235365]: 2025-11-25 09:58:50.475203389 +0000 UTC m=+0.037077523 container create 8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.489 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.494 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.497 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.497 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.498 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.498 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.498 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.499 228708 DEBUG nova.virt.libvirt.driver [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 04:58:50 np0005534696 systemd[1]: Started libpod-conmon-8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719.scope.
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.523 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 04:58:50 np0005534696 systemd[1]: Started libcrun container.
Nov 25 04:58:50 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c460752b6082597c782a0f9336069f5da7ccac2fc52d1bae592a29ae8957916/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 04:58:50 np0005534696 podman[235365]: 2025-11-25 09:58:50.539425772 +0000 UTC m=+0.101299926 container init 8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 04:58:50 np0005534696 podman[235365]: 2025-11-25 09:58:50.544525478 +0000 UTC m=+0.106399602 container start 8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 04:58:50 np0005534696 podman[235365]: 2025-11-25 09:58:50.456374957 +0000 UTC m=+0.018249111 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.553 228708 INFO nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Took 8.23 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.554 228708 DEBUG nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:58:50 np0005534696 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[235377]: [NOTICE]   (235381) : New worker (235383) forked
Nov 25 04:58:50 np0005534696 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[235377]: [NOTICE]   (235381) : Loading success.
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.604 228708 INFO nova.compute.manager [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Took 9.00 seconds to build instance.#033[00m
Nov 25 04:58:50 np0005534696 nova_compute[228704]: 2025-11-25 09:58:50.614 228708 DEBUG oslo_concurrency.lockutils [None req-623db0d1-53e3-4459-8c58-583b7d21ea5b c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:51 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:58:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:52.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:58:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:52 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:52 np0005534696 nova_compute[228704]: 2025-11-25 09:58:52.546 228708 DEBUG nova.compute.manager [req-ddf8262f-dc3e-4d90-a25d-41c9a8133cf8 req-d447e110-1a2a-4d0f-9b32-e4312d96181a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:58:52 np0005534696 nova_compute[228704]: 2025-11-25 09:58:52.547 228708 DEBUG oslo_concurrency.lockutils [req-ddf8262f-dc3e-4d90-a25d-41c9a8133cf8 req-d447e110-1a2a-4d0f-9b32-e4312d96181a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:52 np0005534696 nova_compute[228704]: 2025-11-25 09:58:52.547 228708 DEBUG oslo_concurrency.lockutils [req-ddf8262f-dc3e-4d90-a25d-41c9a8133cf8 req-d447e110-1a2a-4d0f-9b32-e4312d96181a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:52 np0005534696 nova_compute[228704]: 2025-11-25 09:58:52.547 228708 DEBUG oslo_concurrency.lockutils [req-ddf8262f-dc3e-4d90-a25d-41c9a8133cf8 req-d447e110-1a2a-4d0f-9b32-e4312d96181a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:52 np0005534696 nova_compute[228704]: 2025-11-25 09:58:52.548 228708 DEBUG nova.compute.manager [req-ddf8262f-dc3e-4d90-a25d-41c9a8133cf8 req-d447e110-1a2a-4d0f-9b32-e4312d96181a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] No waiting events found dispatching network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:58:52 np0005534696 nova_compute[228704]: 2025-11-25 09:58:52.549 228708 WARNING nova.compute.manager [req-ddf8262f-dc3e-4d90-a25d-41c9a8133cf8 req-d447e110-1a2a-4d0f-9b32-e4312d96181a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received unexpected event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for instance with vm_state active and task_state None.#033[00m
Nov 25 04:58:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:53.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:53 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:54.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:54 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.349 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.567 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:54 np0005534696 NetworkManager[48892]: <info>  [1764064734.5676] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Nov 25 04:58:54 np0005534696 NetworkManager[48892]: <info>  [1764064734.5683] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 25 04:58:54 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:54Z|00052|binding|INFO|Releasing lport 296dedf0-24b8-4ce5-952e-492b27ffb1cd from this chassis (sb_readonly=0)
Nov 25 04:58:54 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:54Z|00053|binding|INFO|Releasing lport 296dedf0-24b8-4ce5-952e-492b27ffb1cd from this chassis (sb_readonly=0)
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.606 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.610 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.817 228708 DEBUG nova.compute.manager [req-f3bdb14b-b1fa-4122-88e6-e2c30de97fa7 req-7674f60b-d6c3-4763-a273-446bfab5b35a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received event network-changed-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.818 228708 DEBUG nova.compute.manager [req-f3bdb14b-b1fa-4122-88e6-e2c30de97fa7 req-7674f60b-d6c3-4763-a273-446bfab5b35a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Refreshing instance network info cache due to event network-changed-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.818 228708 DEBUG oslo_concurrency.lockutils [req-f3bdb14b-b1fa-4122-88e6-e2c30de97fa7 req-7674f60b-d6c3-4763-a273-446bfab5b35a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.818 228708 DEBUG oslo_concurrency.lockutils [req-f3bdb14b-b1fa-4122-88e6-e2c30de97fa7 req-7674f60b-d6c3-4763-a273-446bfab5b35a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.819 228708 DEBUG nova.network.neutron [req-f3bdb14b-b1fa-4122-88e6-e2c30de97fa7 req-7674f60b-d6c3-4763-a273-446bfab5b35a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Refreshing network info cache for port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.960 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "2263bfd6-6d17-4f29-80a6-4de684c71b20" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.960 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.960 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.961 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.961 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.962 228708 INFO nova.compute.manager [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Terminating instance#033[00m
Nov 25 04:58:54 np0005534696 nova_compute[228704]: 2025-11-25 09:58:54.963 228708 DEBUG nova.compute.manager [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 04:58:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:54 np0005534696 kernel: tap9c65e9ae-66 (unregistering): left promiscuous mode
Nov 25 04:58:54 np0005534696 NetworkManager[48892]: <info>  [1764064734.9959] device (tap9c65e9ae-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.005 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.007 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:55 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:55Z|00054|binding|INFO|Releasing lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 from this chassis (sb_readonly=0)
Nov 25 04:58:55 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:55Z|00055|binding|INFO|Setting lport 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 down in Southbound
Nov 25 04:58:55 np0005534696 ovn_controller[133535]: 2025-11-25T09:58:55Z|00056|binding|INFO|Removing iface tap9c65e9ae-66 ovn-installed in OVS
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.017 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:53:a1 10.100.0.9'], port_security=['fa:16:3e:dd:53:a1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2263bfd6-6d17-4f29-80a6-4de684c71b20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-558139589', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '77421187-f24b-4366-8c59-8fbcf4a8390c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.197'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4110b518-ed62-4127-a552-a8ff9779dc23, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.020 142676 INFO neutron.agent.ovn.metadata.agent [-] Port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 in datapath b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f unbound from our chassis#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.021 142676 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.023 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.024 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[de613c31-038b-4588-bb84-5b20561ddc02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.025 142676 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f namespace which is not needed anymore#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.033 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:55 np0005534696 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 25 04:58:55 np0005534696 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Consumed 4.879s CPU time.
Nov 25 04:58:55 np0005534696 systemd-machined[192760]: Machine qemu-2-instance-00000008 terminated.
Nov 25 04:58:55 np0005534696 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[235377]: [NOTICE]   (235381) : haproxy version is 2.8.14-c23fe91
Nov 25 04:58:55 np0005534696 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[235377]: [NOTICE]   (235381) : path to executable is /usr/sbin/haproxy
Nov 25 04:58:55 np0005534696 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[235377]: [ALERT]    (235381) : Current worker (235383) exited with code 143 (Terminated)
Nov 25 04:58:55 np0005534696 neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f[235377]: [WARNING]  (235381) : All workers exited. Exiting... (0)
Nov 25 04:58:55 np0005534696 systemd[1]: libpod-8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719.scope: Deactivated successfully.
Nov 25 04:58:55 np0005534696 podman[235414]: 2025-11-25 09:58:55.131263236 +0000 UTC m=+0.036192894 container died 8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:58:55 np0005534696 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719-userdata-shm.mount: Deactivated successfully.
Nov 25 04:58:55 np0005534696 systemd[1]: var-lib-containers-storage-overlay-6c460752b6082597c782a0f9336069f5da7ccac2fc52d1bae592a29ae8957916-merged.mount: Deactivated successfully.
Nov 25 04:58:55 np0005534696 podman[235414]: 2025-11-25 09:58:55.15341772 +0000 UTC m=+0.058347337 container cleanup 8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 04:58:55 np0005534696 systemd[1]: libpod-conmon-8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719.scope: Deactivated successfully.
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.189 228708 INFO nova.virt.libvirt.driver [-] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Instance destroyed successfully.#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.189 228708 DEBUG nova.objects.instance [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid 2263bfd6-6d17-4f29-80a6-4de684c71b20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 04:58:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:55.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:55 np0005534696 podman[235437]: 2025-11-25 09:58:55.202810231 +0000 UTC m=+0.033876196 container remove 8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.201 228708 DEBUG nova.virt.libvirt.vif [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T09:58:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-937118943',display_name='tempest-TestNetworkBasicOps-server-937118943',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-937118943',id=8,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGohUWqWT1QHj/KLwuiH7ORJbolYBLSV5Lv6EWTBjqEr4nTrdTXQjXPMJicpI70FfKP9ZkwyZjNVtsGR8bLRkMsHzSWJ0qeT1Bvfk9HdXH5ScIcn7fxUJcNIOAiIqLENMA==',key_name='tempest-TestNetworkBasicOps-1784472797',keypairs=<?>,launch_index=0,launched_at=2025-11-25T09:58:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-xi0oerxd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T09:58:50Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=2263bfd6-6d17-4f29-80a6-4de684c71b20,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.201 228708 DEBUG nova.network.os_vif_util [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.202 228708 DEBUG nova.network.os_vif_util [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.202 228708 DEBUG os_vif [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.203 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.204 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c65e9ae-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.207 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.208 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.208 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[873a7b7b-3a8d-4863-8e92-72e92e36a553]: (4, ('Tue Nov 25 09:58:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f (8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719)\n8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719\nTue Nov 25 09:58:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f (8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719)\n8cb43c24b9f89130c26564e3daae751485fe8c6ab5efc161955cc84353f9c719\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.209 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[af2bb08d-c097-4152-b81a-941a940148f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.210 228708 INFO os_vif [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:53:a1,bridge_name='br-int',has_traffic_filtering=True,id=9c65e9ae-66c9-44ad-8fb1-f07f28d9b619,network=Network(b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9c65e9ae-66')#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.211 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1cfdfd5-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:58:55 np0005534696 kernel: tapb1cfdfd5-80: left promiscuous mode
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.218 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[f6067912-340e-4584-9cd2-b07f263a4cf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.224 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.230 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.233 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[83e9161d-7227-4fc5-a2e8-5f693e02f010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.234 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[1376a5a8-82f5-459c-b3d7-9c17a2a8df21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.246 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[9062868b-326d-479c-b3bc-f1746bbbf46f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 351145, 'reachable_time': 32496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235476, 'error': None, 'target': 'ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.247 142787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 04:58:55 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:58:55.247 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[a37801c5-9b7f-4b1c-8646-b876a838de69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 04:58:55 np0005534696 systemd[1]: run-netns-ovnmeta\x2db1cfdfd5\x2d8c3e\x2d495c\x2da4e8\x2d9aea8f1d5f9f.mount: Deactivated successfully.
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.373 228708 INFO nova.virt.libvirt.driver [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Deleting instance files /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20_del#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.374 228708 INFO nova.virt.libvirt.driver [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Deletion of /var/lib/nova/instances/2263bfd6-6d17-4f29-80a6-4de684c71b20_del complete#033[00m
Nov 25 04:58:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:55 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9800052a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.598 228708 INFO nova.compute.manager [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.599 228708 DEBUG oslo.service.loopingcall [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.599 228708 DEBUG nova.compute.manager [-] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 04:58:55 np0005534696 nova_compute[228704]: 2025-11-25 09:58:55.599 228708 DEBUG nova.network.neutron [-] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 04:58:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:58:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:56.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:56 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:56 np0005534696 podman[235482]: 2025-11-25 09:58:56.333793187 +0000 UTC m=+0.043590903 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.337 228708 DEBUG nova.network.neutron [req-f3bdb14b-b1fa-4122-88e6-e2c30de97fa7 req-7674f60b-d6c3-4763-a273-446bfab5b35a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Updated VIF entry in instance network info cache for port 9c65e9ae-66c9-44ad-8fb1-f07f28d9b619. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.337 228708 DEBUG nova.network.neutron [req-f3bdb14b-b1fa-4122-88e6-e2c30de97fa7 req-7674f60b-d6c3-4763-a273-446bfab5b35a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Updating instance_info_cache with network_info: [{"id": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "address": "fa:16:3e:dd:53:a1", "network": {"id": "b1cfdfd5-8c3e-495c-a4e8-9aea8f1d5f9f", "bridge": "br-int", "label": "tempest-network-smoke--1265691061", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c65e9ae-66", "ovs_interfaceid": "9c65e9ae-66c9-44ad-8fb1-f07f28d9b619", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.351 228708 DEBUG oslo_concurrency.lockutils [req-f3bdb14b-b1fa-4122-88e6-e2c30de97fa7 req-7674f60b-d6c3-4763-a273-446bfab5b35a c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-2263bfd6-6d17-4f29-80a6-4de684c71b20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.700 228708 DEBUG nova.network.neutron [-] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.716 228708 INFO nova.compute.manager [-] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Took 1.12 seconds to deallocate network for instance.#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.757 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.757 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.809 228708 DEBUG oslo_concurrency.processutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.893 228708 DEBUG nova.compute.manager [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received event network-vif-unplugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.893 228708 DEBUG oslo_concurrency.lockutils [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.894 228708 DEBUG oslo_concurrency.lockutils [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.899 228708 DEBUG oslo_concurrency.lockutils [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.900 228708 DEBUG nova.compute.manager [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] No waiting events found dispatching network-vif-unplugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.900 228708 WARNING nova.compute.manager [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received unexpected event network-vif-unplugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.900 228708 DEBUG nova.compute.manager [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.900 228708 DEBUG oslo_concurrency.lockutils [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.902 228708 DEBUG oslo_concurrency.lockutils [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.902 228708 DEBUG oslo_concurrency.lockutils [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.902 228708 DEBUG nova.compute.manager [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] No waiting events found dispatching network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 04:58:56 np0005534696 nova_compute[228704]: 2025-11-25 09:58:56.902 228708 WARNING nova.compute.manager [req-368aa1a0-ee28-4c16-8b67-730b90acfa26 req-3f6363a9-2f32-4106-b3f0-e9f495e54e2b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Received unexpected event network-vif-plugged-9c65e9ae-66c9-44ad-8fb1-f07f28d9b619 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 04:58:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:58:57 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1528136379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:58:57 np0005534696 nova_compute[228704]: 2025-11-25 09:58:57.151 228708 DEBUG oslo_concurrency.processutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:58:57 np0005534696 nova_compute[228704]: 2025-11-25 09:58:57.155 228708 DEBUG nova.compute.provider_tree [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:58:57 np0005534696 nova_compute[228704]: 2025-11-25 09:58:57.167 228708 DEBUG nova.scheduler.client.report [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:58:57 np0005534696 nova_compute[228704]: 2025-11-25 09:58:57.180 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:57.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:57 np0005534696 nova_compute[228704]: 2025-11-25 09:58:57.201 228708 INFO nova.scheduler.client.report [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance 2263bfd6-6d17-4f29-80a6-4de684c71b20#033[00m
Nov 25 04:58:57 np0005534696 nova_compute[228704]: 2025-11-25 09:58:57.263 228708 DEBUG oslo_concurrency.lockutils [None req-d77852ca-068e-4d18-96b6-3d8453043da4 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "2263bfd6-6d17-4f29-80a6-4de684c71b20" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:58:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:57 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:58:58.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9800052a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:58 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:58:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:58:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:58:59.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:58:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:58:59 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:58:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:58:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:58:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:58:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:00 np0005534696 nova_compute[228704]: 2025-11-25 09:59:00.034 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:00.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:59:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff9780fde40 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:59:00 np0005534696 nova_compute[228704]: 2025-11-25 09:59:00.205 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:59:00 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff980005fb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:59:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:00 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:01.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:59:01 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:59:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:01 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:02.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:59:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff924066420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 25 04:59:02 np0005534696 kernel: ganesha.nfsd[234525]: segfault at 50 ip 00007ff987a2632e sp 00007ff93b7fd210 error 4 in libntirpc.so.5.8[7ff987a0b000+2c000] likely on CPU 2 (core 0, socket 2)
Nov 25 04:59:02 np0005534696 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Nov 25 04:59:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[231094]: 25/11/2025 09:59:02 : epoch 69257c59 : compute-2 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8dc001320 fd 48 proxy ignored for local
Nov 25 04:59:02 np0005534696 systemd[1]: Started Process Core Dump (PID 235527/UID 0).
Nov 25 04:59:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:02 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:03.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:03 np0005534696 systemd-coredump[235528]: Process 231098 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 78:#012#0  0x00007ff987a2632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Nov 25 04:59:03 np0005534696 systemd[1]: systemd-coredump@7-235527-0.service: Deactivated successfully.
Nov 25 04:59:03 np0005534696 systemd[1]: systemd-coredump@7-235527-0.service: Consumed 1.021s CPU time.
Nov 25 04:59:03 np0005534696 podman[235557]: 2025-11-25 09:59:03.398836157 +0000 UTC m=+0.022186114 container died 4b16e2c7ab5313d4b0a2c0091ef1011873f465d6fd9c16f3db1253b399dcde09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:59:03 np0005534696 systemd[1]: var-lib-containers-storage-overlay-70cb4f315f28fde409f85929b64d1b00f0e2c63a6173f193f4f33fd3fddef6cb-merged.mount: Deactivated successfully.
Nov 25 04:59:03 np0005534696 podman[235557]: 2025-11-25 09:59:03.428810115 +0000 UTC m=+0.052160051 container remove 4b16e2c7ab5313d4b0a2c0091ef1011873f465d6fd9c16f3db1253b399dcde09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 25 04:59:03 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Main process exited, code=exited, status=139/n/a
Nov 25 04:59:03 np0005534696 podman[235556]: 2025-11-25 09:59:03.445402711 +0000 UTC m=+0.069961506 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 25 04:59:03 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Failed with result 'exit-code'.
Nov 25 04:59:03 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.434s CPU time.
Nov 25 04:59:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:03 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:04.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:04 np0005534696 nova_compute[228704]: 2025-11-25 09:59:04.781 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:04 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:05 np0005534696 nova_compute[228704]: 2025-11-25 09:59:05.036 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:05.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:05 np0005534696 nova_compute[228704]: 2025-11-25 09:59:05.206 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:59:05.353 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:59:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:59:05.354 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:59:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:59:05.354 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:59:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:05 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:06.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:06 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:07.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:07 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:08.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [WARNING] 328/095908 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 25 04:59:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-nfs-cephfs-compute-2-flyakz[87914]: [ALERT] 328/095908 (4) : backend 'backend' has no server available!
Nov 25 04:59:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:08 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:09.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:09 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:10 np0005534696 nova_compute[228704]: 2025-11-25 09:59:10.038 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:10.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:10 np0005534696 nova_compute[228704]: 2025-11-25 09:59:10.183 228708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064735.1818955, 2263bfd6-6d17-4f29-80a6-4de684c71b20 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 04:59:10 np0005534696 nova_compute[228704]: 2025-11-25 09:59:10.183 228708 INFO nova.compute.manager [-] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] VM Stopped (Lifecycle Event)#033[00m
Nov 25 04:59:10 np0005534696 nova_compute[228704]: 2025-11-25 09:59:10.206 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:10 np0005534696 nova_compute[228704]: 2025-11-25 09:59:10.226 228708 DEBUG nova.compute.manager [None req-6f1150fe-eea5-435a-95d4-c6b8c8c92883 - - - - - -] [instance: 2263bfd6-6d17-4f29-80a6-4de684c71b20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 04:59:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:10 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:11.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:11 np0005534696 podman[235625]: 2025-11-25 09:59:11.359143498 +0000 UTC m=+0.069081849 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 04:59:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:11 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:59:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:12.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:59:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:12 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:13.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:13 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Scheduled restart job, restart counter is at 8.
Nov 25 04:59:13 np0005534696 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:59:13 np0005534696 systemd[1]: ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90@nfs.cephfs.1.0.compute-2.jouchy.service: Consumed 1.434s CPU time.
Nov 25 04:59:13 np0005534696 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90...
Nov 25 04:59:13 np0005534696 podman[235732]: 2025-11-25 09:59:13.91967299 +0000 UTC m=+0.029308563 container create afc8a7f7775bc1eadf7be781d688d2da8cb2b20920163da5aa215e0ea842a9a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:59:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd3bb75d1abac57179c7ad2b39a6f69fbf16bfff344a043f72218317d149042/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 25 04:59:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd3bb75d1abac57179c7ad2b39a6f69fbf16bfff344a043f72218317d149042/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 04:59:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd3bb75d1abac57179c7ad2b39a6f69fbf16bfff344a043f72218317d149042/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:59:13 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd3bb75d1abac57179c7ad2b39a6f69fbf16bfff344a043f72218317d149042/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.jouchy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 25 04:59:13 np0005534696 podman[235732]: 2025-11-25 09:59:13.965683564 +0000 UTC m=+0.075319158 container init afc8a7f7775bc1eadf7be781d688d2da8cb2b20920163da5aa215e0ea842a9a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 04:59:13 np0005534696 podman[235732]: 2025-11-25 09:59:13.970212513 +0000 UTC m=+0.079848087 container start afc8a7f7775bc1eadf7be781d688d2da8cb2b20920163da5aa215e0ea842a9a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 04:59:13 np0005534696 bash[235732]: afc8a7f7775bc1eadf7be781d688d2da8cb2b20920163da5aa215e0ea842a9a0
Nov 25 04:59:13 np0005534696 podman[235732]: 2025-11-25 09:59:13.90774561 +0000 UTC m=+0.017381204 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 04:59:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 25 04:59:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 25 04:59:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:13 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:13 np0005534696 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.jouchy for af1c9ae3-08d7-5547-a53d-2cccf7c6ef90.
Nov 25 04:59:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 25 04:59:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 25 04:59:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 25 04:59:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 25 04:59:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 25 04:59:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:59:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:59:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:14.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:59:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 04:59:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:59:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:59:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 04:59:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:14 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:15 np0005534696 nova_compute[228704]: 2025-11-25 09:59:15.039 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:15 np0005534696 nova_compute[228704]: 2025-11-25 09:59:15.207 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:15 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:16 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:17 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:18.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:18 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:59:18.374 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 04:59:18 np0005534696 nova_compute[228704]: 2025-11-25 09:59:18.375 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:18 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:59:18.375 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 04:59:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:59:18 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 04:59:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:18 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:19.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:19 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.040 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:59:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:59:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:59:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.208 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.383 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.383 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.383 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.383 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.383 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.642 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.725 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.744 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:59:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:20 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.992 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.993 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4926MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.994 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 04:59:20 np0005534696 nova_compute[228704]: 2025-11-25 09:59:20.994 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.053 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.053 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.077 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing inventories for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.091 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating ProviderTree inventory for provider e8eea1e0-1833-4152-af65-8b442fac3e0d from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.091 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.108 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing aggregate associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.130 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing trait associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, traits: HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.141 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 04:59:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:21.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:21 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 04:59:21 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/253252585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.488 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.492 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.503 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.514 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 04:59:21 np0005534696 nova_compute[228704]: 2025-11-25 09:59:21.515 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 04:59:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:21 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:22.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:22 np0005534696 nova_compute[228704]: 2025-11-25 09:59:22.515 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:22 np0005534696 nova_compute[228704]: 2025-11-25 09:59:22.517 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:22 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:23.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:23 np0005534696 nova_compute[228704]: 2025-11-25 09:59:23.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:23 np0005534696 nova_compute[228704]: 2025-11-25 09:59:23.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 04:59:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:23 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:24 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:59:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:59:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:59:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:59:25 np0005534696 nova_compute[228704]: 2025-11-25 09:59:25.042 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:25 np0005534696 nova_compute[228704]: 2025-11-25 09:59:25.209 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:25.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:25 np0005534696 nova_compute[228704]: 2025-11-25 09:59:25.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:25 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 09:59:25.377 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 04:59:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:25 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:26.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:26 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:27.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:27 np0005534696 podman[235924]: 2025-11-25 09:59:27.340315052 +0000 UTC m=+0.049845486 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 04:59:27 np0005534696 nova_compute[228704]: 2025-11-25 09:59:27.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:27 np0005534696 nova_compute[228704]: 2025-11-25 09:59:27.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 04:59:27 np0005534696 nova_compute[228704]: 2025-11-25 09:59:27.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 04:59:27 np0005534696 nova_compute[228704]: 2025-11-25 09:59:27.370 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 04:59:27 np0005534696 nova_compute[228704]: 2025-11-25 09:59:27.370 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:27 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:28 np0005534696 nova_compute[228704]: 2025-11-25 09:59:28.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:28 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:29.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:29 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:59:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:59:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:59:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:59:30 np0005534696 nova_compute[228704]: 2025-11-25 09:59:30.043 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:30.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:30 np0005534696 nova_compute[228704]: 2025-11-25 09:59:30.211 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:30 np0005534696 nova_compute[228704]: 2025-11-25 09:59:30.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 04:59:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:30 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:31.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:31 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:32.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:32 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:33.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:33 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:59:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:34.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:59:34 np0005534696 podman[235947]: 2025-11-25 09:59:34.349066867 +0000 UTC m=+0.056518499 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 04:59:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:34 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:59:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:59:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:59:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:35 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:59:35 np0005534696 nova_compute[228704]: 2025-11-25 09:59:35.046 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:35 np0005534696 nova_compute[228704]: 2025-11-25 09:59:35.213 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:35.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:35 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:36.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:36 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:37.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:37 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:38.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:38 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:39.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:39 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:59:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:59:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:59:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:59:40 np0005534696 nova_compute[228704]: 2025-11-25 09:59:40.048 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:40 np0005534696 nova_compute[228704]: 2025-11-25 09:59:40.214 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:40.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:40 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:41 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:42.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:42 np0005534696 podman[235978]: 2025-11-25 09:59:42.337837927 +0000 UTC m=+0.041781813 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 04:59:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:42 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:43 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:59:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:44.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:59:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:44 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:59:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:59:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:59:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:59:45 np0005534696 nova_compute[228704]: 2025-11-25 09:59:45.049 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:45 np0005534696 nova_compute[228704]: 2025-11-25 09:59:45.215 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:45 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:59:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:46.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:59:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:46 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:47.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:47 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:59:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:48.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:59:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:48 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:49.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:49 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:59:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:59:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:59:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:59:50 np0005534696 nova_compute[228704]: 2025-11-25 09:59:50.052 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:50 np0005534696 nova_compute[228704]: 2025-11-25 09:59:50.216 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 04:59:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 04:59:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:50.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 04:59:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:50 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:51.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:51 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:52.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:52 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:53.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:53 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:54.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:54 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 04:59:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 04:59:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 04:59:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 04:59:55 np0005534696 nova_compute[228704]: 2025-11-25 09:59:55.055 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:59:55 np0005534696 nova_compute[228704]: 2025-11-25 09:59:55.217 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 04:59:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:55 np0005534696 ovn_controller[133535]: 2025-11-25T09:59:55Z|00057|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Nov 25 04:59:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 04:59:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:55 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:56 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:57.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:57 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 04:59:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:09:59:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 04:59:58 np0005534696 podman[236038]: 2025-11-25 09:59:58.337589276 +0000 UTC m=+0.044672313 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 04:59:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:58 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 04:59:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 04:59:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:09:59:59.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 04:59:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 09:59:59 2025: (VI_0) received an invalid passwd!
Nov 25 04:59:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 09:59:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 09:59:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:00 np0005534696 nova_compute[228704]: 2025-11-25 10:00:00.057 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:00 np0005534696 nova_compute[228704]: 2025-11-25 10:00:00.218 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:00.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:00 np0005534696 ceph-mon[75508]: overall HEALTH_OK
Nov 25 05:00:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:01.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:02.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:00:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:03.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:00:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:00:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:04.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:00:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:05 np0005534696 nova_compute[228704]: 2025-11-25 10:00:05.059 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:05 np0005534696 nova_compute[228704]: 2025-11-25 10:00:05.219 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:05.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:05.354 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 05:00:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:05.354 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 05:00:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:05.355 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 05:00:05 np0005534696 podman[236086]: 2025-11-25 10:00:05.375246035 +0000 UTC m=+0.084798551 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 05:00:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:06.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:07.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:08.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:09.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:10 np0005534696 nova_compute[228704]: 2025-11-25 10:00:10.061 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:10 np0005534696 nova_compute[228704]: 2025-11-25 10:00:10.220 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:10.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:11.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:12.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:13.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:13 np0005534696 podman[236117]: 2025-11-25 10:00:13.33142484 +0000 UTC m=+0.036155484 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 05:00:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:00:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:00:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:15 np0005534696 nova_compute[228704]: 2025-11-25 10:00:15.062 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:15 np0005534696 nova_compute[228704]: 2025-11-25 10:00:15.221 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:00:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:15.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:00:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:16.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:17.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:00:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:18.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:00:18 np0005534696 nova_compute[228704]: 2025-11-25 10:00:18.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:00:18 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:18.461 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 05:00:18 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:18.462 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 05:00:18 np0005534696 nova_compute[228704]: 2025-11-25 10:00:18.462 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:18 np0005534696 ceph-osd[77914]: bluestore.MempoolThread fragmentation_score=0.000501 took=0.000042s
Nov 25 05:00:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:19.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.064 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.222 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:20.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.375 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.375 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.376 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.376 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.376 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4117408647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.713 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:00:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.917 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.918 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4944MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.918 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.919 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:00:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.994 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:00:20 np0005534696 nova_compute[228704]: 2025-11-25 10:00:20.995 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:00:21 np0005534696 nova_compute[228704]: 2025-11-25 10:00:21.013 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:00:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:21.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:21 np0005534696 nova_compute[228704]: 2025-11-25 10:00:21.355 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:00:21 np0005534696 nova_compute[228704]: 2025-11-25 10:00:21.360 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:00:21 np0005534696 nova_compute[228704]: 2025-11-25 10:00:21.376 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:00:21 np0005534696 nova_compute[228704]: 2025-11-25 10:00:21.377 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:00:21 np0005534696 nova_compute[228704]: 2025-11-25 10:00:21.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:00:21 np0005534696 ceph-mon[75508]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 25 05:00:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:22.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:23.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:23 np0005534696 nova_compute[228704]: 2025-11-25 10:00:23.377 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:00:23 np0005534696 nova_compute[228704]: 2025-11-25 10:00:23.378 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:00:23 np0005534696 nova_compute[228704]: 2025-11-25 10:00:23.378 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:00:23 np0005534696 nova_compute[228704]: 2025-11-25 10:00:23.378 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:00:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:00:23 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:00:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:24.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:24 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:24.464 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:00:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:25 np0005534696 nova_compute[228704]: 2025-11-25 10:00:25.065 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:25 np0005534696 nova_compute[228704]: 2025-11-25 10:00:25.223 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:25.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:25 np0005534696 nova_compute[228704]: 2025-11-25 10:00:25.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:00:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:00:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:26.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:00:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:27.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:28 np0005534696 nova_compute[228704]: 2025-11-25 10:00:28.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 05:00:28 np0005534696 nova_compute[228704]: 2025-11-25 10:00:28.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 05:00:28 np0005534696 nova_compute[228704]: 2025-11-25 10:00:28.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 05:00:28 np0005534696 nova_compute[228704]: 2025-11-25 10:00:28.369 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 05:00:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:29.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:29 np0005534696 podman[236323]: 2025-11-25 10:00:29.328146982 +0000 UTC m=+0.035226391 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 05:00:29 np0005534696 nova_compute[228704]: 2025-11-25 10:00:29.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 05:00:29 np0005534696 nova_compute[228704]: 2025-11-25 10:00:29.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 05:00:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:30 np0005534696 nova_compute[228704]: 2025-11-25 10:00:30.068 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:30 np0005534696 nova_compute[228704]: 2025-11-25 10:00:30.223 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:30.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:30 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:00:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:31.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:31 np0005534696 nova_compute[228704]: 2025-11-25 10:00:31.351 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 05:00:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:32.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:34.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:35 np0005534696 nova_compute[228704]: 2025-11-25 10:00:35.069 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:35 np0005534696 nova_compute[228704]: 2025-11-25 10:00:35.224 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 05:00:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:35.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:36.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:36 np0005534696 podman[236346]: 2025-11-25 10:00:36.346501796 +0000 UTC m=+0.056230615 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 05:00:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:37.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.450 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.450 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.462 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.522 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.522 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.527 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.527 228708 INFO nova.compute.claims [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Claim successful on node compute-2.ctlplane.example.com
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.605 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 05:00:37 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:00:37 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/928370122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.943 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.946 228708 DEBUG nova.compute.provider_tree [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.964 228708 DEBUG nova.scheduler.client.report [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.988 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 05:00:37 np0005534696 nova_compute[228704]: 2025-11-25 10:00:37.988 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 05:00:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.032 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.032 228708 DEBUG nova.network.neutron [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.045 228708 INFO nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.054 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.111 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.112 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.112 228708 INFO nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Creating image(s)
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.128 228708 DEBUG nova.storage.rbd_utils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.144 228708 DEBUG nova.storage.rbd_utils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.161 228708 DEBUG nova.storage.rbd_utils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.163 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.209 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.209 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.210 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.210 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.227 228708 DEBUG nova.storage.rbd_utils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.231 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 05:00:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:38.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.356 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.378 228708 DEBUG nova.policy [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.403 228708 DEBUG nova.storage.rbd_utils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.453 228708 DEBUG nova.objects.instance [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.465 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.466 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Ensure instance console log exists: /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.466 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.466 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.467 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:00:38 np0005534696 nova_compute[228704]: 2025-11-25 10:00:38.931 228708 DEBUG nova.network.neutron [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Successfully created port: ed193a65-d9ea-43d6-9637-53f0924c6cb8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 05:00:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:39.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:39 np0005534696 nova_compute[228704]: 2025-11-25 10:00:39.619 228708 DEBUG nova.network.neutron [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Successfully updated port: ed193a65-d9ea-43d6-9637-53f0924c6cb8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 05:00:39 np0005534696 nova_compute[228704]: 2025-11-25 10:00:39.630 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 05:00:39 np0005534696 nova_compute[228704]: 2025-11-25 10:00:39.630 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 05:00:39 np0005534696 nova_compute[228704]: 2025-11-25 10:00:39.630 228708 DEBUG nova.network.neutron [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 05:00:39 np0005534696 nova_compute[228704]: 2025-11-25 10:00:39.724 228708 DEBUG nova.compute.manager [req-d7bf4498-4cd4-44f1-b65c-b58e88d18f75 req-ea27ebb2-fef5-4107-9f1e-60e6c03d16e8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-changed-ed193a65-d9ea-43d6-9637-53f0924c6cb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:00:39 np0005534696 nova_compute[228704]: 2025-11-25 10:00:39.724 228708 DEBUG nova.compute.manager [req-d7bf4498-4cd4-44f1-b65c-b58e88d18f75 req-ea27ebb2-fef5-4107-9f1e-60e6c03d16e8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Refreshing instance network info cache due to event network-changed-ed193a65-d9ea-43d6-9637-53f0924c6cb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 05:00:39 np0005534696 nova_compute[228704]: 2025-11-25 10:00:39.724 228708 DEBUG oslo_concurrency.lockutils [req-d7bf4498-4cd4-44f1-b65c-b58e88d18f75 req-ea27ebb2-fef5-4107-9f1e-60e6c03d16e8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 05:00:39 np0005534696 nova_compute[228704]: 2025-11-25 10:00:39.771 228708 DEBUG nova.network.neutron [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 05:00:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.074 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.226 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:40.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.311 228708 DEBUG nova.network.neutron [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Updating instance_info_cache with network_info: [{"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.325 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.325 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Instance network_info: |[{"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.325 228708 DEBUG oslo_concurrency.lockutils [req-d7bf4498-4cd4-44f1-b65c-b58e88d18f75 req-ea27ebb2-fef5-4107-9f1e-60e6c03d16e8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.325 228708 DEBUG nova.network.neutron [req-d7bf4498-4cd4-44f1-b65c-b58e88d18f75 req-ea27ebb2-fef5-4107-9f1e-60e6c03d16e8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Refreshing network info cache for port ed193a65-d9ea-43d6-9637-53f0924c6cb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.327 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Start _get_guest_xml network_info=[{"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.330 228708 WARNING nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.334 228708 DEBUG nova.virt.libvirt.host [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.334 228708 DEBUG nova.virt.libvirt.host [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.338 228708 DEBUG nova.virt.libvirt.host [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.339 228708 DEBUG nova.virt.libvirt.host [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.339 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.339 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.340 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.340 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.340 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.340 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.340 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.341 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.341 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.341 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.341 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.341 228708 DEBUG nova.virt.hardware [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.343 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:00:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 05:00:40 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2516742035' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.683 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.701 228708 DEBUG nova.storage.rbd_utils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:00:40 np0005534696 nova_compute[228704]: 2025-11-25 10:00:40.704 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:00:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:41 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 05:00:41 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2678682281' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.040 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.042 228708 DEBUG nova.virt.libvirt.vif [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:00:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1212649934',display_name='tempest-TestNetworkBasicOps-server-1212649934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1212649934',id=12,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFr4p+Tj1dvb/wiVsya8SvP1zErCGbOFykmQ8gcv/p5/lbFMClmWmZz+V+R2J4/FnQNyeHlKEMfk830hk0XOUxf9nxLetVMck9RFKnV9KV5aZJ747GG6nvQ9s1GdVx8JRA==',key_name='tempest-TestNetworkBasicOps-1398817829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-32somqgt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:00:38Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.043 228708 DEBUG nova.network.os_vif_util [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.043 228708 DEBUG nova.network.os_vif_util [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:e4:e2,bridge_name='br-int',has_traffic_filtering=True,id=ed193a65-d9ea-43d6-9637-53f0924c6cb8,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped193a65-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.044 228708 DEBUG nova.objects.instance [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.058 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <uuid>4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b</uuid>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <name>instance-0000000c</name>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <memory>131072</memory>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <vcpu>1</vcpu>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <metadata>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <nova:name>tempest-TestNetworkBasicOps-server-1212649934</nova:name>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <nova:creationTime>2025-11-25 10:00:40</nova:creationTime>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <nova:flavor name="m1.nano">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <nova:memory>128</nova:memory>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <nova:disk>1</nova:disk>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <nova:swap>0</nova:swap>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <nova:vcpus>1</nova:vcpus>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      </nova:flavor>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <nova:owner>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      </nova:owner>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <nova:ports>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <nova:port uuid="ed193a65-d9ea-43d6-9637-53f0924c6cb8">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        </nova:port>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      </nova:ports>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </nova:instance>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  </metadata>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <sysinfo type="smbios">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <system>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <entry name="manufacturer">RDO</entry>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <entry name="product">OpenStack Compute</entry>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <entry name="serial">4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b</entry>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <entry name="uuid">4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b</entry>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <entry name="family">Virtual Machine</entry>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </system>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  </sysinfo>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <os>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <boot dev="hd"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <smbios mode="sysinfo"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  </os>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <features>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <acpi/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <apic/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <vmcoreinfo/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  </features>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <clock offset="utc">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <timer name="hpet" present="no"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  </clock>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <cpu mode="host-model" match="exact">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  <devices>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <disk type="network" device="disk">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <driver type="raw" cache="none"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <source protocol="rbd" name="vms/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <host name="192.168.122.100" port="6789"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <host name="192.168.122.102" port="6789"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <host name="192.168.122.101" port="6789"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      </source>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <auth username="openstack">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      </auth>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <target dev="vda" bus="virtio"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </disk>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <disk type="network" device="cdrom">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <driver type="raw" cache="none"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <source protocol="rbd" name="vms/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk.config">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <host name="192.168.122.100" port="6789"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <host name="192.168.122.102" port="6789"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <host name="192.168.122.101" port="6789"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      </source>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <auth username="openstack">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:        <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      </auth>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <target dev="sda" bus="sata"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </disk>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <interface type="ethernet">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <mac address="fa:16:3e:48:e4:e2"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <model type="virtio"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <mtu size="1442"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <target dev="taped193a65-d9"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </interface>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <serial type="pty">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <log file="/var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b/console.log" append="off"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </serial>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <video>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <model type="virtio"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </video>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <input type="tablet" bus="usb"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <rng model="virtio">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <backend model="random">/dev/urandom</backend>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </rng>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <controller type="usb" index="0"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    <memballoon model="virtio">
Nov 25 05:00:41 np0005534696 nova_compute[228704]:      <stats period="10"/>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:    </memballoon>
Nov 25 05:00:41 np0005534696 nova_compute[228704]:  </devices>
Nov 25 05:00:41 np0005534696 nova_compute[228704]: </domain>
Nov 25 05:00:41 np0005534696 nova_compute[228704]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.059 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Preparing to wait for external event network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.059 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.059 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.060 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.060 228708 DEBUG nova.virt.libvirt.vif [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:00:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1212649934',display_name='tempest-TestNetworkBasicOps-server-1212649934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1212649934',id=12,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFr4p+Tj1dvb/wiVsya8SvP1zErCGbOFykmQ8gcv/p5/lbFMClmWmZz+V+R2J4/FnQNyeHlKEMfk830hk0XOUxf9nxLetVMck9RFKnV9KV5aZJ747GG6nvQ9s1GdVx8JRA==',key_name='tempest-TestNetworkBasicOps-1398817829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-32somqgt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:00:38Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.060 228708 DEBUG nova.network.os_vif_util [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.061 228708 DEBUG nova.network.os_vif_util [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:e4:e2,bridge_name='br-int',has_traffic_filtering=True,id=ed193a65-d9ea-43d6-9637-53f0924c6cb8,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped193a65-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.061 228708 DEBUG os_vif [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:e4:e2,bridge_name='br-int',has_traffic_filtering=True,id=ed193a65-d9ea-43d6-9637-53f0924c6cb8,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped193a65-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.062 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.062 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.062 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.065 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.065 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped193a65-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.066 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=taped193a65-d9, col_values=(('external_ids', {'iface-id': 'ed193a65-d9ea-43d6-9637-53f0924c6cb8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:e4:e2', 'vm-uuid': '4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.067 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.0683] manager: (taped193a65-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.069 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.072 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.073 228708 INFO os_vif [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:e4:e2,bridge_name='br-int',has_traffic_filtering=True,id=ed193a65-d9ea-43d6-9637-53f0924c6cb8,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped193a65-d9')#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.106 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.106 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.106 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:48:e4:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.107 228708 INFO nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Using config drive#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.123 228708 DEBUG nova.storage.rbd_utils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:00:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:41.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.389 228708 INFO nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Creating config drive at /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b/disk.config#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.393 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ysol731 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.511 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3ysol731" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.532 228708 DEBUG nova.storage.rbd_utils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.534 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b/disk.config 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.615 228708 DEBUG oslo_concurrency.processutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b/disk.config 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.616 228708 INFO nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Deleting local config drive /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b/disk.config because it was imported into RBD.#033[00m
Nov 25 05:00:41 np0005534696 kernel: taped193a65-d9: entered promiscuous mode
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.6515] manager: (taped193a65-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.652 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 ovn_controller[133535]: 2025-11-25T10:00:41Z|00058|binding|INFO|Claiming lport ed193a65-d9ea-43d6-9637-53f0924c6cb8 for this chassis.
Nov 25 05:00:41 np0005534696 ovn_controller[133535]: 2025-11-25T10:00:41Z|00059|binding|INFO|ed193a65-d9ea-43d6-9637-53f0924c6cb8: Claiming fa:16:3e:48:e4:e2 10.100.0.3
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.656 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.659 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.6679] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.6682] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.667 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.667 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:e4:e2 10.100.0.3'], port_security=['fa:16:3e:48:e4:e2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee98bf99-3920-4a16-acac-021a1cbfa3eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6121cf32-17ed-44cd-a0b1-25d4c69fcad0, chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=ed193a65-d9ea-43d6-9637-53f0924c6cb8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.671 142676 INFO neutron.agent.ovn.metadata.agent [-] Port ed193a65-d9ea-43d6-9637-53f0924c6cb8 in datapath 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc bound to our chassis#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.672 142676 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc#033[00m
Nov 25 05:00:41 np0005534696 systemd-udevd[236696]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.685 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[5b70ea37-0045-4c37-b5aa-77d7806468b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.686 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3ce4f6a0-b1 in ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.688 232274 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3ce4f6a0-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.688 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[43ba800a-abe1-4387-b744-547a2920a9cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.6904] device (taped193a65-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.6911] device (taped193a65-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.690 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[bc61a957-e30d-4bc3-9436-aef9a1d019dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 systemd-machined[192760]: New machine qemu-3-instance-0000000c.
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.698 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[d570a81e-cbc9-4e38-b477-40fa6387a151]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 systemd[1]: Started Virtual Machine qemu-3-instance-0000000c.
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.720 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[3e4121a3-bed2-4a14-826e-064c4d76a05b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.746 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[d49d27b5-604c-45b2-81fa-9c09b1431aeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 systemd-udevd[236699]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.7537] manager: (tap3ce4f6a0-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.754 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[99fb0202-c8be-4a2c-986a-a2882f723518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.757 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.761 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.764 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 ovn_controller[133535]: 2025-11-25T10:00:41Z|00060|binding|INFO|Setting lport ed193a65-d9ea-43d6-9637-53f0924c6cb8 ovn-installed in OVS
Nov 25 05:00:41 np0005534696 ovn_controller[133535]: 2025-11-25T10:00:41Z|00061|binding|INFO|Setting lport ed193a65-d9ea-43d6-9637-53f0924c6cb8 up in Southbound
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.782 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.785 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[0853da06-fde0-41be-8139-4ade44ea55f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.788 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9615a4-b132-409d-b891-9bc29057cf94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.8034] device (tap3ce4f6a0-b0): carrier: link connected
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.807 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[dc67478e-0cf7-46a9-9f2d-08e0647ca6cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.819 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7c8943-dde0-4d1e-96bb-a0b0d3e4d57c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ce4f6a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:47:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 362322, 'reachable_time': 35148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236722, 'error': None, 'target': 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.829 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[80617f0e-bd41-44a7-8b44-c1eaef72c13b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:47f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 362322, 'tstamp': 362322}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236723, 'error': None, 'target': 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.842 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[656eca93-ff21-49bb-980c-b0e144795cde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ce4f6a0-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:47:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 362322, 'reachable_time': 35148, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236724, 'error': None, 'target': 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.862 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3b77f4-15f8-497d-965f-30e35b3648ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.899 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[af3c8cc9-3320-438f-b9b5-2e57e1727529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.899 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ce4f6a0-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.900 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.900 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ce4f6a0-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.901 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 NetworkManager[48892]: <info>  [1764064841.9021] manager: (tap3ce4f6a0-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 25 05:00:41 np0005534696 kernel: tap3ce4f6a0-b0: entered promiscuous mode
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.905 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.906 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3ce4f6a0-b0, col_values=(('external_ids', {'iface-id': '5755bc32-958f-433d-9a4f-77334dafcf22'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.907 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 ovn_controller[133535]: 2025-11-25T10:00:41Z|00062|binding|INFO|Releasing lport 5755bc32-958f-433d-9a4f-77334dafcf22 from this chassis (sb_readonly=0)
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.908 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.909 142676 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.910 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[b855eb33-8077-412c-ab56-88e71f1461e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.910 142676 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: global
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    log         /dev/log local0 debug
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    log-tag     haproxy-metadata-proxy-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    user        root
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    group       root
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    maxconn     1024
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    pidfile     /var/lib/neutron/external/pids/3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc.pid.haproxy
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    daemon
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: defaults
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    log global
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    mode http
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    option httplog
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    option dontlognull
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    option http-server-close
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    option forwardfor
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    retries                 3
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    timeout http-request    30s
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    timeout connect         30s
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    timeout client          32s
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    timeout server          32s
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    timeout http-keep-alive 30s
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: listen listener
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    bind 169.254.169.254:80
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]:    http-request add-header X-OVN-Network-ID 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 05:00:41 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:00:41.911 142676 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'env', 'PROCESS_TAG=haproxy-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 05:00:41 np0005534696 nova_compute[228704]: 2025-11-25 10:00:41.921 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:42 np0005534696 podman[236788]: 2025-11-25 10:00:42.195409225 +0000 UTC m=+0.033081157 container create fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 05:00:42 np0005534696 systemd[1]: Started libpod-conmon-fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331.scope.
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.237 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064842.2369843, 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.237 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] VM Started (Lifecycle Event)#033[00m
Nov 25 05:00:42 np0005534696 systemd[1]: Started libcrun container.
Nov 25 05:00:42 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a54269755d2298e5db04a0aaa323167c657cb0b9c382d17afcb2afd7ff61c1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 05:00:42 np0005534696 podman[236788]: 2025-11-25 10:00:42.254986478 +0000 UTC m=+0.092658430 container init fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.255 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.257 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064842.2370772, 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.257 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] VM Paused (Lifecycle Event)#033[00m
Nov 25 05:00:42 np0005534696 podman[236788]: 2025-11-25 10:00:42.259209116 +0000 UTC m=+0.096881047 container start fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 05:00:42 np0005534696 podman[236788]: 2025-11-25 10:00:42.180694292 +0000 UTC m=+0.018366245 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.268 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.270 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 05:00:42 np0005534696 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236806]: [NOTICE]   (236810) : New worker (236812) forked
Nov 25 05:00:42 np0005534696 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236806]: [NOTICE]   (236810) : Loading success.
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.286 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 05:00:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:42.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.484 228708 DEBUG nova.compute.manager [req-9d099298-4879-4671-bc89-fc23f23a80d9 req-b96e63d9-c7db-4e08-8ced-8852bdb356bb c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.485 228708 DEBUG oslo_concurrency.lockutils [req-9d099298-4879-4671-bc89-fc23f23a80d9 req-b96e63d9-c7db-4e08-8ced-8852bdb356bb c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.485 228708 DEBUG oslo_concurrency.lockutils [req-9d099298-4879-4671-bc89-fc23f23a80d9 req-b96e63d9-c7db-4e08-8ced-8852bdb356bb c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.485 228708 DEBUG oslo_concurrency.lockutils [req-9d099298-4879-4671-bc89-fc23f23a80d9 req-b96e63d9-c7db-4e08-8ced-8852bdb356bb c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.485 228708 DEBUG nova.compute.manager [req-9d099298-4879-4671-bc89-fc23f23a80d9 req-b96e63d9-c7db-4e08-8ced-8852bdb356bb c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Processing event network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.486 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.487 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064842.4878116, 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.488 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] VM Resumed (Lifecycle Event)#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.489 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.491 228708 INFO nova.virt.libvirt.driver [-] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Instance spawned successfully.#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.491 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.506 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.510 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.514 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.514 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.515 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.515 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.515 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.516 228708 DEBUG nova.virt.libvirt.driver [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.530 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.551 228708 INFO nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Took 4.44 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.552 228708 DEBUG nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.596 228708 INFO nova.compute.manager [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Took 5.10 seconds to build instance.#033[00m
Nov 25 05:00:42 np0005534696 nova_compute[228704]: 2025-11-25 10:00:42.607 228708 DEBUG oslo_concurrency.lockutils [None req-4586a22d-c74e-4492-9a2e-95cc78d88b7d c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:00:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:43.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:43 np0005534696 nova_compute[228704]: 2025-11-25 10:00:43.377 228708 DEBUG nova.network.neutron [req-d7bf4498-4cd4-44f1-b65c-b58e88d18f75 req-ea27ebb2-fef5-4107-9f1e-60e6c03d16e8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Updated VIF entry in instance network info cache for port ed193a65-d9ea-43d6-9637-53f0924c6cb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 05:00:43 np0005534696 nova_compute[228704]: 2025-11-25 10:00:43.378 228708 DEBUG nova.network.neutron [req-d7bf4498-4cd4-44f1-b65c-b58e88d18f75 req-ea27ebb2-fef5-4107-9f1e-60e6c03d16e8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Updating instance_info_cache with network_info: [{"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:00:43 np0005534696 nova_compute[228704]: 2025-11-25 10:00:43.392 228708 DEBUG oslo_concurrency.lockutils [req-d7bf4498-4cd4-44f1-b65c-b58e88d18f75 req-ea27ebb2-fef5-4107-9f1e-60e6c03d16e8 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 05:00:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:44.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:44 np0005534696 podman[236844]: 2025-11-25 10:00:44.331699676 +0000 UTC m=+0.043116721 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 05:00:44 np0005534696 nova_compute[228704]: 2025-11-25 10:00:44.554 228708 DEBUG nova.compute.manager [req-e7ac851e-e9a7-4fdc-90a1-dda09e6d71ad req-3a188f51-ab15-46ec-b354-8ce0e506c875 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:00:44 np0005534696 nova_compute[228704]: 2025-11-25 10:00:44.555 228708 DEBUG oslo_concurrency.lockutils [req-e7ac851e-e9a7-4fdc-90a1-dda09e6d71ad req-3a188f51-ab15-46ec-b354-8ce0e506c875 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:00:44 np0005534696 nova_compute[228704]: 2025-11-25 10:00:44.555 228708 DEBUG oslo_concurrency.lockutils [req-e7ac851e-e9a7-4fdc-90a1-dda09e6d71ad req-3a188f51-ab15-46ec-b354-8ce0e506c875 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:00:44 np0005534696 nova_compute[228704]: 2025-11-25 10:00:44.555 228708 DEBUG oslo_concurrency.lockutils [req-e7ac851e-e9a7-4fdc-90a1-dda09e6d71ad req-3a188f51-ab15-46ec-b354-8ce0e506c875 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:00:44 np0005534696 nova_compute[228704]: 2025-11-25 10:00:44.555 228708 DEBUG nova.compute.manager [req-e7ac851e-e9a7-4fdc-90a1-dda09e6d71ad req-3a188f51-ab15-46ec-b354-8ce0e506c875 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] No waiting events found dispatching network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 05:00:44 np0005534696 nova_compute[228704]: 2025-11-25 10:00:44.555 228708 WARNING nova.compute.manager [req-e7ac851e-e9a7-4fdc-90a1-dda09e6d71ad req-3a188f51-ab15-46ec-b354-8ce0e506c875 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received unexpected event network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 for instance with vm_state active and task_state None.#033[00m
Nov 25 05:00:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:45 np0005534696 nova_compute[228704]: 2025-11-25 10:00:45.073 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:45.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:46 np0005534696 nova_compute[228704]: 2025-11-25 10:00:46.068 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:46.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:47.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:47 np0005534696 nova_compute[228704]: 2025-11-25 10:00:47.432 228708 DEBUG nova.compute.manager [req-dcebc28e-f2ea-4852-9ee7-bccd9db06cc5 req-03ee32e8-c1c6-4078-b9e7-6f5508d10aa5 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-changed-ed193a65-d9ea-43d6-9637-53f0924c6cb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:00:47 np0005534696 nova_compute[228704]: 2025-11-25 10:00:47.432 228708 DEBUG nova.compute.manager [req-dcebc28e-f2ea-4852-9ee7-bccd9db06cc5 req-03ee32e8-c1c6-4078-b9e7-6f5508d10aa5 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Refreshing instance network info cache due to event network-changed-ed193a65-d9ea-43d6-9637-53f0924c6cb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 05:00:47 np0005534696 nova_compute[228704]: 2025-11-25 10:00:47.432 228708 DEBUG oslo_concurrency.lockutils [req-dcebc28e-f2ea-4852-9ee7-bccd9db06cc5 req-03ee32e8-c1c6-4078-b9e7-6f5508d10aa5 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 05:00:47 np0005534696 nova_compute[228704]: 2025-11-25 10:00:47.432 228708 DEBUG oslo_concurrency.lockutils [req-dcebc28e-f2ea-4852-9ee7-bccd9db06cc5 req-03ee32e8-c1c6-4078-b9e7-6f5508d10aa5 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 05:00:47 np0005534696 nova_compute[228704]: 2025-11-25 10:00:47.432 228708 DEBUG nova.network.neutron [req-dcebc28e-f2ea-4852-9ee7-bccd9db06cc5 req-03ee32e8-c1c6-4078-b9e7-6f5508d10aa5 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Refreshing network info cache for port ed193a65-d9ea-43d6-9637-53f0924c6cb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 05:00:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:00:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:48.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:00:48 np0005534696 nova_compute[228704]: 2025-11-25 10:00:48.509 228708 DEBUG nova.network.neutron [req-dcebc28e-f2ea-4852-9ee7-bccd9db06cc5 req-03ee32e8-c1c6-4078-b9e7-6f5508d10aa5 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Updated VIF entry in instance network info cache for port ed193a65-d9ea-43d6-9637-53f0924c6cb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 05:00:48 np0005534696 nova_compute[228704]: 2025-11-25 10:00:48.509 228708 DEBUG nova.network.neutron [req-dcebc28e-f2ea-4852-9ee7-bccd9db06cc5 req-03ee32e8-c1c6-4078-b9e7-6f5508d10aa5 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Updating instance_info_cache with network_info: [{"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:00:48 np0005534696 nova_compute[228704]: 2025-11-25 10:00:48.522 228708 DEBUG oslo_concurrency.lockutils [req-dcebc28e-f2ea-4852-9ee7-bccd9db06cc5 req-03ee32e8-c1c6-4078-b9e7-6f5508d10aa5 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 05:00:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:49.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:50 np0005534696 nova_compute[228704]: 2025-11-25 10:00:50.075 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:50.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:51 np0005534696 nova_compute[228704]: 2025-11-25 10:00:51.069 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:51.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:52.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:53 np0005534696 ovn_controller[133535]: 2025-11-25T10:00:53Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:e4:e2 10.100.0.3
Nov 25 05:00:53 np0005534696 ovn_controller[133535]: 2025-11-25T10:00:53Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:e4:e2 10.100.0.3
Nov 25 05:00:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:53.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 25 05:00:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267203020' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 25 05:00:53 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 25 05:00:53 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267203020' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 25 05:00:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:54.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:55 np0005534696 nova_compute[228704]: 2025-11-25 10:00:55.075 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:55.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:00:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:00:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:00:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:56 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:00:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:00:56 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:00:56 np0005534696 nova_compute[228704]: 2025-11-25 10:00:56.071 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:00:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:56.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:57.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:00:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:00:58.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:00:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:00:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:00:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:00:59.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:00:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:00:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:00:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:00:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:00 np0005534696 nova_compute[228704]: 2025-11-25 10:01:00.076 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:00.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:00 np0005534696 podman[236878]: 2025-11-25 10:01:00.331127061 +0000 UTC m=+0.039647172 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 05:01:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:01 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:01 np0005534696 nova_compute[228704]: 2025-11-25 10:01:01.073 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:01.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:02.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:03.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:04.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:05 np0005534696 nova_compute[228704]: 2025-11-25 10:01:05.078 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:05.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:05.355 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:05.356 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:05.357 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:06 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:06 np0005534696 nova_compute[228704]: 2025-11-25 10:01:06.074 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:07.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:07 np0005534696 podman[236937]: 2025-11-25 10:01:07.372170304 +0000 UTC m=+0.084364929 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 05:01:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:08.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.552 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.552 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.552 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.552 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.552 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.553 228708 INFO nova.compute.manager [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Terminating instance#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.554 228708 DEBUG nova.compute.manager [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 05:01:08 np0005534696 kernel: taped193a65-d9 (unregistering): left promiscuous mode
Nov 25 05:01:08 np0005534696 NetworkManager[48892]: <info>  [1764064868.5901] device (taped193a65-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 05:01:08 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:08Z|00063|binding|INFO|Releasing lport ed193a65-d9ea-43d6-9637-53f0924c6cb8 from this chassis (sb_readonly=0)
Nov 25 05:01:08 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:08Z|00064|binding|INFO|Setting lport ed193a65-d9ea-43d6-9637-53f0924c6cb8 down in Southbound
Nov 25 05:01:08 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:08Z|00065|binding|INFO|Removing iface taped193a65-d9 ovn-installed in OVS
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.597 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.616 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:08 np0005534696 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 05:01:08 np0005534696 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000c.scope: Consumed 10.710s CPU time.
Nov 25 05:01:08 np0005534696 systemd-machined[192760]: Machine qemu-3-instance-0000000c terminated.
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.672 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:e4:e2 10.100.0.3'], port_security=['fa:16:3e:48:e4:e2 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee98bf99-3920-4a16-acac-021a1cbfa3eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6121cf32-17ed-44cd-a0b1-25d4c69fcad0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=ed193a65-d9ea-43d6-9637-53f0924c6cb8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.673 142676 INFO neutron.agent.ovn.metadata.agent [-] Port ed193a65-d9ea-43d6-9637-53f0924c6cb8 in datapath 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc unbound from our chassis#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.674 142676 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.675 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[35f7c0a4-eddc-48c8-b4de-7ce7d5a5e6da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.675 142676 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc namespace which is not needed anymore#033[00m
Nov 25 05:01:08 np0005534696 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236806]: [NOTICE]   (236810) : haproxy version is 2.8.14-c23fe91
Nov 25 05:01:08 np0005534696 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236806]: [NOTICE]   (236810) : path to executable is /usr/sbin/haproxy
Nov 25 05:01:08 np0005534696 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236806]: [WARNING]  (236810) : Exiting Master process...
Nov 25 05:01:08 np0005534696 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236806]: [ALERT]    (236810) : Current worker (236812) exited with code 143 (Terminated)
Nov 25 05:01:08 np0005534696 neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc[236806]: [WARNING]  (236810) : All workers exited. Exiting... (0)
Nov 25 05:01:08 np0005534696 systemd[1]: libpod-fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331.scope: Deactivated successfully.
Nov 25 05:01:08 np0005534696 podman[236982]: 2025-11-25 10:01:08.777358457 +0000 UTC m=+0.040956028 container died fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.777 228708 INFO nova.virt.libvirt.driver [-] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Instance destroyed successfully.#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.777 228708 DEBUG nova.objects.instance [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.790 228708 DEBUG nova.virt.libvirt.vif [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T10:00:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1212649934',display_name='tempest-TestNetworkBasicOps-server-1212649934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1212649934',id=12,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFr4p+Tj1dvb/wiVsya8SvP1zErCGbOFykmQ8gcv/p5/lbFMClmWmZz+V+R2J4/FnQNyeHlKEMfk830hk0XOUxf9nxLetVMck9RFKnV9KV5aZJ747GG6nvQ9s1GdVx8JRA==',key_name='tempest-TestNetworkBasicOps-1398817829',keypairs=<?>,launch_index=0,launched_at=2025-11-25T10:00:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-32somqgt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T10:00:42Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.791 228708 DEBUG nova.network.os_vif_util [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.792 228708 DEBUG nova.network.os_vif_util [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:e4:e2,bridge_name='br-int',has_traffic_filtering=True,id=ed193a65-d9ea-43d6-9637-53f0924c6cb8,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped193a65-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.792 228708 DEBUG os_vif [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:e4:e2,bridge_name='br-int',has_traffic_filtering=True,id=ed193a65-d9ea-43d6-9637-53f0924c6cb8,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped193a65-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 05:01:08 np0005534696 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331-userdata-shm.mount: Deactivated successfully.
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.795 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.795 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped193a65-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.798 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.801 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 05:01:08 np0005534696 systemd[1]: var-lib-containers-storage-overlay-2a54269755d2298e5db04a0aaa323167c657cb0b9c382d17afcb2afd7ff61c1d-merged.mount: Deactivated successfully.
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.803 228708 INFO os_vif [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:e4:e2,bridge_name='br-int',has_traffic_filtering=True,id=ed193a65-d9ea-43d6-9637-53f0924c6cb8,network=Network(3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='taped193a65-d9')#033[00m
Nov 25 05:01:08 np0005534696 podman[236982]: 2025-11-25 10:01:08.805091955 +0000 UTC m=+0.068689526 container cleanup fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 05:01:08 np0005534696 systemd[1]: libpod-conmon-fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331.scope: Deactivated successfully.
Nov 25 05:01:08 np0005534696 podman[237025]: 2025-11-25 10:01:08.854609979 +0000 UTC m=+0.031087089 container remove fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.859 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[97a67f5b-f2da-444b-947d-b2bce85a79b0]: (4, ('Tue Nov 25 10:01:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc (fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331)\nfe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331\nTue Nov 25 10:01:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc (fe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331)\nfe8e76ba732d1ceb2e488fe9620fe94204ff1fd35607ab9f40b179c5352c3331\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.860 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[b3480e21-e2d1-417a-814f-8b78b54c04bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.861 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ce4f6a0-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:08 np0005534696 kernel: tap3ce4f6a0-b0: left promiscuous mode
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.864 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.877 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.879 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[f7221981-7148-443c-84b6-99317adc4a15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.886 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[1bef55b0-d18b-4ef6-908f-0f1e37dae721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.888 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[737c0bac-12eb-4a00-ba59-44e8a15be058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.900 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f5a4c0-f2f7-4cc1-a317-4f5f4a74e1e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 362316, 'reachable_time': 33698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237047, 'error': None, 'target': 'ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:08 np0005534696 systemd[1]: run-netns-ovnmeta\x2d3ce4f6a0\x2dbac0\x2d4c27\x2d8d9f\x2dea178a5a08dc.mount: Deactivated successfully.
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.901 142787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 05:01:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:08.901 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2d60ec-2f4e-4a19-ba50-63259b7102b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.963 228708 INFO nova.virt.libvirt.driver [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Deleting instance files /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_del#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.963 228708 INFO nova.virt.libvirt.driver [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Deletion of /var/lib/nova/instances/4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b_del complete#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.967 228708 DEBUG nova.compute.manager [req-b50b4005-e239-4dd0-a181-19f70ad5359d req-7d8f1a8d-fca6-426d-bd90-d05b6749d711 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-vif-unplugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.967 228708 DEBUG oslo_concurrency.lockutils [req-b50b4005-e239-4dd0-a181-19f70ad5359d req-7d8f1a8d-fca6-426d-bd90-d05b6749d711 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.967 228708 DEBUG oslo_concurrency.lockutils [req-b50b4005-e239-4dd0-a181-19f70ad5359d req-7d8f1a8d-fca6-426d-bd90-d05b6749d711 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.967 228708 DEBUG oslo_concurrency.lockutils [req-b50b4005-e239-4dd0-a181-19f70ad5359d req-7d8f1a8d-fca6-426d-bd90-d05b6749d711 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.967 228708 DEBUG nova.compute.manager [req-b50b4005-e239-4dd0-a181-19f70ad5359d req-7d8f1a8d-fca6-426d-bd90-d05b6749d711 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] No waiting events found dispatching network-vif-unplugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 05:01:08 np0005534696 nova_compute[228704]: 2025-11-25 10:01:08.967 228708 DEBUG nova.compute.manager [req-b50b4005-e239-4dd0-a181-19f70ad5359d req-7d8f1a8d-fca6-426d-bd90-d05b6749d711 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-vif-unplugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 05:01:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.026 228708 INFO nova.compute.manager [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.026 228708 DEBUG oslo.service.loopingcall [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.026 228708 DEBUG nova.compute.manager [-] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.026 228708 DEBUG nova.network.neutron [-] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.129 228708 DEBUG nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-changed-ed193a65-d9ea-43d6-9637-53f0924c6cb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.129 228708 DEBUG nova.compute.manager [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Refreshing instance network info cache due to event network-changed-ed193a65-d9ea-43d6-9637-53f0924c6cb8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.129 228708 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.129 228708 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.130 228708 DEBUG nova.network.neutron [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Refreshing network info cache for port ed193a65-d9ea-43d6-9637-53f0924c6cb8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 05:01:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:09.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.699 228708 DEBUG nova.network.neutron [-] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.712 228708 INFO nova.compute.manager [-] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Took 0.69 seconds to deallocate network for instance.#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.746 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.746 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:09 np0005534696 nova_compute[228704]: 2025-11-25 10:01:09.792 228708 DEBUG oslo_concurrency.processutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:10 np0005534696 nova_compute[228704]: 2025-11-25 10:01:10.081 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:01:10 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1682084571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:01:10 np0005534696 nova_compute[228704]: 2025-11-25 10:01:10.128 228708 DEBUG oslo_concurrency.processutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:10 np0005534696 nova_compute[228704]: 2025-11-25 10:01:10.132 228708 DEBUG nova.compute.provider_tree [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:01:10 np0005534696 nova_compute[228704]: 2025-11-25 10:01:10.143 228708 DEBUG nova.scheduler.client.report [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:01:10 np0005534696 nova_compute[228704]: 2025-11-25 10:01:10.166 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:10 np0005534696 nova_compute[228704]: 2025-11-25 10:01:10.184 228708 INFO nova.scheduler.client.report [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b#033[00m
Nov 25 05:01:10 np0005534696 nova_compute[228704]: 2025-11-25 10:01:10.235 228708 DEBUG oslo_concurrency.lockutils [None req-5cdd875f-4ed5-4150-bde7-6759f1a5873a c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:10.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:11 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.051 228708 DEBUG nova.compute.manager [req-e3d16a07-1cca-4761-9dbb-6875dc8fce4b req-b83fefd0-171a-43bc-a63f-25ca795f787c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.051 228708 DEBUG oslo_concurrency.lockutils [req-e3d16a07-1cca-4761-9dbb-6875dc8fce4b req-b83fefd0-171a-43bc-a63f-25ca795f787c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.051 228708 DEBUG oslo_concurrency.lockutils [req-e3d16a07-1cca-4761-9dbb-6875dc8fce4b req-b83fefd0-171a-43bc-a63f-25ca795f787c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.052 228708 DEBUG oslo_concurrency.lockutils [req-e3d16a07-1cca-4761-9dbb-6875dc8fce4b req-b83fefd0-171a-43bc-a63f-25ca795f787c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.052 228708 DEBUG nova.compute.manager [req-e3d16a07-1cca-4761-9dbb-6875dc8fce4b req-b83fefd0-171a-43bc-a63f-25ca795f787c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] No waiting events found dispatching network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.052 228708 WARNING nova.compute.manager [req-e3d16a07-1cca-4761-9dbb-6875dc8fce4b req-b83fefd0-171a-43bc-a63f-25ca795f787c c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received unexpected event network-vif-plugged-ed193a65-d9ea-43d6-9637-53f0924c6cb8 for instance with vm_state deleted and task_state None.#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.148 228708 DEBUG nova.network.neutron [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Updated VIF entry in instance network info cache for port ed193a65-d9ea-43d6-9637-53f0924c6cb8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.148 228708 DEBUG nova.network.neutron [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Updating instance_info_cache with network_info: [{"id": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "address": "fa:16:3e:48:e4:e2", "network": {"id": "3ce4f6a0-bac0-4c27-8d9f-ea178a5a08dc", "bridge": "br-int", "label": "tempest-network-smoke--1490140625", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "taped193a65-d9", "ovs_interfaceid": "ed193a65-d9ea-43d6-9637-53f0924c6cb8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.174 228708 DEBUG oslo_concurrency.lockutils [req-99019e9c-7599-400e-a27a-42555af8a738 req-1890031d-c533-4a7b-abe6-4190b1f114a6 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 05:01:11 np0005534696 nova_compute[228704]: 2025-11-25 10:01:11.241 228708 DEBUG nova.compute.manager [req-3d91033b-21b7-450a-8bfa-0c7675e5fcb4 req-81af3252-efcb-4c2a-a004-af2945b8cc7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Received event network-vif-deleted-ed193a65-d9ea-43d6-9637-53f0924c6cb8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:11.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:12.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:13.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:13 np0005534696 nova_compute[228704]: 2025-11-25 10:01:13.799 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:14.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:15 np0005534696 nova_compute[228704]: 2025-11-25 10:01:15.084 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:15 np0005534696 podman[237078]: 2025-11-25 10:01:15.331388945 +0000 UTC m=+0.040209391 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 05:01:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:15.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:16 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:16.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:17.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:18 np0005534696 nova_compute[228704]: 2025-11-25 10:01:18.505 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:18 np0005534696 nova_compute[228704]: 2025-11-25 10:01:18.588 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:18 np0005534696 nova_compute[228704]: 2025-11-25 10:01:18.801 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:19.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:20 np0005534696 nova_compute[228704]: 2025-11-25 10:01:20.085 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:01:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:20.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:01:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:21 np0005534696 nova_compute[228704]: 2025-11-25 10:01:21.011 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:21 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:21.011 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 05:01:21 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:21.012 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 05:01:21 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:21.013 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:22.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.374 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.374 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.374 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.374 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.374 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:22 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:01:22 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/840095353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.710 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.936 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.938 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4901MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.938 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.939 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.986 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:01:22 np0005534696 nova_compute[228704]: 2025-11-25 10:01:22.987 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:01:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.022 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:23.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:01:23 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1525438121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.370 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.374 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.387 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.423 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.423 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.774 228708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064868.773852, 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.775 228708 INFO nova.compute.manager [-] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] VM Stopped (Lifecycle Event)#033[00m
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.788 228708 DEBUG nova.compute.manager [None req-7e551e95-c18e-4409-affe-c32a04404c20 - - - - - -] [instance: 4d68d7ae-aa5e-441a-93a2-5c41bc41bc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:01:23 np0005534696 nova_compute[228704]: 2025-11-25 10:01:23.803 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:01:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:24.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:01:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:01:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:01:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:01:24 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:01:24 np0005534696 nova_compute[228704]: 2025-11-25 10:01:24.425 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:24 np0005534696 nova_compute[228704]: 2025-11-25 10:01:24.425 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:25 np0005534696 nova_compute[228704]: 2025-11-25 10:01:25.087 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:01:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:25.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:01:25 np0005534696 nova_compute[228704]: 2025-11-25 10:01:25.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:25 np0005534696 nova_compute[228704]: 2025-11-25 10:01:25.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:01:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:01:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:26.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:01:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:27.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:27 np0005534696 nova_compute[228704]: 2025-11-25 10:01:27.358 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:27 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:01:27 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:01:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:28.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:28 np0005534696 nova_compute[228704]: 2025-11-25 10:01:28.804 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:01:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:29.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.548 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "9f013e5c-902b-4b58-8656-1c3788e671be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.548 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.564 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.626 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.627 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.633 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.633 228708 INFO nova.compute.claims [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Claim successful on node compute-2.ctlplane.example.com#033[00m
Nov 25 05:01:29 np0005534696 nova_compute[228704]: 2025-11-25 10:01:29.710 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:01:30 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2805373666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.062 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.069 228708 DEBUG nova.compute.provider_tree [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.085 228708 DEBUG nova.scheduler.client.report [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.088 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.102 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.102 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.134 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.134 228708 DEBUG nova.network.neutron [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.153 228708 INFO nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.166 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.237 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.238 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.238 228708 INFO nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Creating image(s)#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.265 228708 DEBUG nova.storage.rbd_utils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 9f013e5c-902b-4b58-8656-1c3788e671be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.291 228708 DEBUG nova.storage.rbd_utils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 9f013e5c-902b-4b58-8656-1c3788e671be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.311 228708 DEBUG nova.storage.rbd_utils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 9f013e5c-902b-4b58-8656-1c3788e671be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.315 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:30.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.358 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.363 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.363 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.364 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.364 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.387 228708 DEBUG nova.storage.rbd_utils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 9f013e5c-902b-4b58-8656-1c3788e671be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.389 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 9f013e5c-902b-4b58-8656-1c3788e671be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.404 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.405 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.431 228708 DEBUG nova.policy [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c92fada0e9fc4e9482d24b33b311d806', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.526 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eb4dea50f27669b9ca81c8a7c3cfbc69d1dcb0f9 9f013e5c-902b-4b58-8656-1c3788e671be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.582 228708 DEBUG nova.storage.rbd_utils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] resizing rbd image 9f013e5c-902b-4b58-8656-1c3788e671be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.644 228708 DEBUG nova.objects.instance [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'migration_context' on Instance uuid 9f013e5c-902b-4b58-8656-1c3788e671be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.658 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.659 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Ensure instance console log exists: /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.659 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.660 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.660 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:30 np0005534696 nova_compute[228704]: 2025-11-25 10:01:30.973 228708 DEBUG nova.network.neutron [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Successfully created port: a184d330-a899-40a5-acc3-3af79dd5853e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 25 05:01:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:31 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:31 np0005534696 podman[237475]: 2025-11-25 10:01:31.353822941 +0000 UTC m=+0.052835446 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 05:01:31 np0005534696 nova_compute[228704]: 2025-11-25 10:01:31.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:31 np0005534696 nova_compute[228704]: 2025-11-25 10:01:31.883 228708 DEBUG nova.network.neutron [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Successfully updated port: a184d330-a899-40a5-acc3-3af79dd5853e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 25 05:01:31 np0005534696 nova_compute[228704]: 2025-11-25 10:01:31.972 228708 DEBUG nova.compute.manager [req-8c4a757c-5c6e-46e8-a258-d34dc7674eb9 req-7d79daf1-2c61-49e4-a89a-dfed41331ca3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-changed-a184d330-a899-40a5-acc3-3af79dd5853e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:31 np0005534696 nova_compute[228704]: 2025-11-25 10:01:31.973 228708 DEBUG nova.compute.manager [req-8c4a757c-5c6e-46e8-a258-d34dc7674eb9 req-7d79daf1-2c61-49e4-a89a-dfed41331ca3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Refreshing instance network info cache due to event network-changed-a184d330-a899-40a5-acc3-3af79dd5853e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 05:01:31 np0005534696 nova_compute[228704]: 2025-11-25 10:01:31.973 228708 DEBUG oslo_concurrency.lockutils [req-8c4a757c-5c6e-46e8-a258-d34dc7674eb9 req-7d79daf1-2c61-49e4-a89a-dfed41331ca3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 05:01:31 np0005534696 nova_compute[228704]: 2025-11-25 10:01:31.973 228708 DEBUG oslo_concurrency.lockutils [req-8c4a757c-5c6e-46e8-a258-d34dc7674eb9 req-7d79daf1-2c61-49e4-a89a-dfed41331ca3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 05:01:31 np0005534696 nova_compute[228704]: 2025-11-25 10:01:31.973 228708 DEBUG nova.network.neutron [req-8c4a757c-5c6e-46e8-a258-d34dc7674eb9 req-7d79daf1-2c61-49e4-a89a-dfed41331ca3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Refreshing network info cache for port a184d330-a899-40a5-acc3-3af79dd5853e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 05:01:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:32 np0005534696 nova_compute[228704]: 2025-11-25 10:01:32.073 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 05:01:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:32.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:32 np0005534696 nova_compute[228704]: 2025-11-25 10:01:32.378 228708 DEBUG nova.network.neutron [req-8c4a757c-5c6e-46e8-a258-d34dc7674eb9 req-7d79daf1-2c61-49e4-a89a-dfed41331ca3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 05:01:32 np0005534696 nova_compute[228704]: 2025-11-25 10:01:32.603 228708 DEBUG nova.network.neutron [req-8c4a757c-5c6e-46e8-a258-d34dc7674eb9 req-7d79daf1-2c61-49e4-a89a-dfed41331ca3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:01:32 np0005534696 nova_compute[228704]: 2025-11-25 10:01:32.615 228708 DEBUG oslo_concurrency.lockutils [req-8c4a757c-5c6e-46e8-a258-d34dc7674eb9 req-7d79daf1-2c61-49e4-a89a-dfed41331ca3 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 05:01:32 np0005534696 nova_compute[228704]: 2025-11-25 10:01:32.615 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquired lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 05:01:32 np0005534696 nova_compute[228704]: 2025-11-25 10:01:32.616 228708 DEBUG nova.network.neutron [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 25 05:01:32 np0005534696 nova_compute[228704]: 2025-11-25 10:01:32.794 228708 DEBUG nova.network.neutron [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 25 05:01:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.806 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.869 228708 DEBUG nova.network.neutron [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Updating instance_info_cache with network_info: [{"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.884 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Releasing lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.884 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Instance network_info: |[{"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.886 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Start _get_guest_xml network_info=[{"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encrypted': False, 'size': 0, 'guest_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'boot_index': 0, 'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'image_id': '62ddd1b7-1bba-493e-a10f-b03a12ab3457'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.890 228708 WARNING nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.895 228708 DEBUG nova.virt.libvirt.host [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.895 228708 DEBUG nova.virt.libvirt.host [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.897 228708 DEBUG nova.virt.libvirt.host [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.898 228708 DEBUG nova.virt.libvirt.host [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.898 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.898 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T09:51:47Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d76f382e-b0e4-4c25-9fed-0129b4e3facf',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T09:51:49Z,direct_url=<?>,disk_format='qcow2',id=62ddd1b7-1bba-493e-a10f-b03a12ab3457,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='f414368112e54eacbcaf4af631b3b667',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T09:51:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.898 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.898 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.899 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.899 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.899 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.899 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.899 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.899 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.900 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.900 228708 DEBUG nova.virt.hardware [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 25 05:01:33 np0005534696 nova_compute[228704]: 2025-11-25 10:01:33.902 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:34 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 05:01:34 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/295942382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.263 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.283 228708 DEBUG nova.storage.rbd_utils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 9f013e5c-902b-4b58-8656-1c3788e671be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.286 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:01:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:34.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:01:34 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 25 05:01:34 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1407742521' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.627 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.628 228708 DEBUG nova.virt.libvirt.vif [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:01:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1639333252',display_name='tempest-TestNetworkBasicOps-server-1639333252',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1639333252',id=13,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELe2XyeB3Hp0jVOZZweS/eo3OV1PQJW5T4XALveJ001xNixVkH2jUpCWQNq58pd4qk5U0SvW8D83cOaXuPyEddHvS7Y/4XXY4odokYWz9B9aIfKLDw7+EQMYoe8Tkc9zg==',key_name='tempest-TestNetworkBasicOps-613762435',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-md2iio3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:01:30Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=9f013e5c-902b-4b58-8656-1c3788e671be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.629 228708 DEBUG nova.network.os_vif_util [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.630 228708 DEBUG nova.network.os_vif_util [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:49:88,bridge_name='br-int',has_traffic_filtering=True,id=a184d330-a899-40a5-acc3-3af79dd5853e,network=Network(dd276fac-68ad-4f7b-84fc-64569c26436e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa184d330-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.631 228708 DEBUG nova.objects.instance [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f013e5c-902b-4b58-8656-1c3788e671be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.642 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] End _get_guest_xml xml=<domain type="kvm">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <uuid>9f013e5c-902b-4b58-8656-1c3788e671be</uuid>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <name>instance-0000000d</name>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <memory>131072</memory>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <vcpu>1</vcpu>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <metadata>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <nova:name>tempest-TestNetworkBasicOps-server-1639333252</nova:name>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <nova:creationTime>2025-11-25 10:01:33</nova:creationTime>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <nova:flavor name="m1.nano">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <nova:memory>128</nova:memory>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <nova:disk>1</nova:disk>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <nova:swap>0</nova:swap>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <nova:ephemeral>0</nova:ephemeral>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <nova:vcpus>1</nova:vcpus>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      </nova:flavor>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <nova:owner>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <nova:user uuid="c92fada0e9fc4e9482d24b33b311d806">tempest-TestNetworkBasicOps-804701909-project-member</nova:user>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <nova:project uuid="fc0c386067c7443085ef3a11d7bc772f">tempest-TestNetworkBasicOps-804701909</nova:project>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      </nova:owner>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <nova:root type="image" uuid="62ddd1b7-1bba-493e-a10f-b03a12ab3457"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <nova:ports>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <nova:port uuid="a184d330-a899-40a5-acc3-3af79dd5853e">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        </nova:port>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      </nova:ports>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </nova:instance>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  </metadata>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <sysinfo type="smbios">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <system>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <entry name="manufacturer">RDO</entry>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <entry name="product">OpenStack Compute</entry>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <entry name="serial">9f013e5c-902b-4b58-8656-1c3788e671be</entry>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <entry name="uuid">9f013e5c-902b-4b58-8656-1c3788e671be</entry>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <entry name="family">Virtual Machine</entry>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </system>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  </sysinfo>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <os>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <boot dev="hd"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <smbios mode="sysinfo"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  </os>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <features>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <acpi/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <apic/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <vmcoreinfo/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  </features>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <clock offset="utc">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <timer name="pit" tickpolicy="delay"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <timer name="hpet" present="no"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  </clock>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <cpu mode="host-model" match="exact">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <topology sockets="1" cores="1" threads="1"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  </cpu>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  <devices>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <disk type="network" device="disk">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <driver type="raw" cache="none"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <source protocol="rbd" name="vms/9f013e5c-902b-4b58-8656-1c3788e671be_disk">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <host name="192.168.122.100" port="6789"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <host name="192.168.122.102" port="6789"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <host name="192.168.122.101" port="6789"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      </source>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <auth username="openstack">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      </auth>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <target dev="vda" bus="virtio"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </disk>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <disk type="network" device="cdrom">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <driver type="raw" cache="none"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <source protocol="rbd" name="vms/9f013e5c-902b-4b58-8656-1c3788e671be_disk.config">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <host name="192.168.122.100" port="6789"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <host name="192.168.122.102" port="6789"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <host name="192.168.122.101" port="6789"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      </source>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <auth username="openstack">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:        <secret type="ceph" uuid="af1c9ae3-08d7-5547-a53d-2cccf7c6ef90"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      </auth>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <target dev="sda" bus="sata"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </disk>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <interface type="ethernet">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <mac address="fa:16:3e:19:49:88"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <model type="virtio"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <driver name="vhost" rx_queue_size="512"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <mtu size="1442"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <target dev="tapa184d330-a8"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </interface>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <serial type="pty">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <log file="/var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be/console.log" append="off"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </serial>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <video>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <model type="virtio"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </video>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <input type="tablet" bus="usb"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <rng model="virtio">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <backend model="random">/dev/urandom</backend>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </rng>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="pci" model="pcie-root-port"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <controller type="usb" index="0"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    <memballoon model="virtio">
Nov 25 05:01:34 np0005534696 nova_compute[228704]:      <stats period="10"/>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:    </memballoon>
Nov 25 05:01:34 np0005534696 nova_compute[228704]:  </devices>
Nov 25 05:01:34 np0005534696 nova_compute[228704]: </domain>
Nov 25 05:01:34 np0005534696 nova_compute[228704]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.643 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Preparing to wait for external event network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.643 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.643 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.644 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.644 228708 DEBUG nova.virt.libvirt.vif [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:01:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1639333252',display_name='tempest-TestNetworkBasicOps-server-1639333252',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1639333252',id=13,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELe2XyeB3Hp0jVOZZweS/eo3OV1PQJW5T4XALveJ001xNixVkH2jUpCWQNq58pd4qk5U0SvW8D83cOaXuPyEddHvS7Y/4XXY4odokYWz9B9aIfKLDw7+EQMYoe8Tkc9zg==',key_name='tempest-TestNetworkBasicOps-613762435',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-md2iio3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:01:30Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=9f013e5c-902b-4b58-8656-1c3788e671be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.644 228708 DEBUG nova.network.os_vif_util [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.645 228708 DEBUG nova.network.os_vif_util [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:49:88,bridge_name='br-int',has_traffic_filtering=True,id=a184d330-a899-40a5-acc3-3af79dd5853e,network=Network(dd276fac-68ad-4f7b-84fc-64569c26436e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa184d330-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.645 228708 DEBUG os_vif [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:49:88,bridge_name='br-int',has_traffic_filtering=True,id=a184d330-a899-40a5-acc3-3af79dd5853e,network=Network(dd276fac-68ad-4f7b-84fc-64569c26436e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa184d330-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.646 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.646 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.646 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.649 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.649 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa184d330-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.649 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa184d330-a8, col_values=(('external_ids', {'iface-id': 'a184d330-a899-40a5-acc3-3af79dd5853e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:49:88', 'vm-uuid': '9f013e5c-902b-4b58-8656-1c3788e671be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.650 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:34 np0005534696 NetworkManager[48892]: <info>  [1764064894.6519] manager: (tapa184d330-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.652 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.656 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.657 228708 INFO os_vif [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:49:88,bridge_name='br-int',has_traffic_filtering=True,id=a184d330-a899-40a5-acc3-3af79dd5853e,network=Network(dd276fac-68ad-4f7b-84fc-64569c26436e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa184d330-a8')#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.682 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.682 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.682 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] No VIF found with MAC fa:16:3e:19:49:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.683 228708 INFO nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Using config drive#033[00m
Nov 25 05:01:34 np0005534696 nova_compute[228704]: 2025-11-25 10:01:34.701 228708 DEBUG nova.storage.rbd_utils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 9f013e5c-902b-4b58-8656-1c3788e671be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:01:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.088 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:35.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.569 228708 INFO nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Creating config drive at /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be/disk.config#033[00m
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.573 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsisr2b6b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.702 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsisr2b6b" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.727 228708 DEBUG nova.storage.rbd_utils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] rbd image 9f013e5c-902b-4b58-8656-1c3788e671be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.731 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be/disk.config 9f013e5c-902b-4b58-8656-1c3788e671be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.827 228708 DEBUG oslo_concurrency.processutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be/disk.config 9f013e5c-902b-4b58-8656-1c3788e671be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.828 228708 INFO nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Deleting local config drive /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be/disk.config because it was imported into RBD.#033[00m
Nov 25 05:01:35 np0005534696 kernel: tapa184d330-a8: entered promiscuous mode
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.870 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.873 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:35 np0005534696 NetworkManager[48892]: <info>  [1764064895.8759] manager: (tapa184d330-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Nov 25 05:01:35 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:35Z|00066|binding|INFO|Claiming lport a184d330-a899-40a5-acc3-3af79dd5853e for this chassis.
Nov 25 05:01:35 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:35Z|00067|binding|INFO|a184d330-a899-40a5-acc3-3af79dd5853e: Claiming fa:16:3e:19:49:88 10.100.0.3
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.888 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:49:88 10.100.0.3'], port_security=['fa:16:3e:19:49:88 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9f013e5c-902b-4b58-8656-1c3788e671be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd276fac-68ad-4f7b-84fc-64569c26436e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b75b8d66-4fb3-472a-a751-9610237a66a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b99af3f9-ef06-4225-90e4-16d8f4ebb7da, chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=a184d330-a899-40a5-acc3-3af79dd5853e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.891 142676 INFO neutron.agent.ovn.metadata.agent [-] Port a184d330-a899-40a5-acc3-3af79dd5853e in datapath dd276fac-68ad-4f7b-84fc-64569c26436e bound to our chassis#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.893 142676 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dd276fac-68ad-4f7b-84fc-64569c26436e#033[00m
Nov 25 05:01:35 np0005534696 systemd-udevd[237632]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.907 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[5341b4eb-674d-4041-8b04-908e5f84b09c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.909 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdd276fac-61 in ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.912 232274 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdd276fac-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.912 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[763e9d81-c503-43e2-bf34-a070f732f8f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.914 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[38659dd9-5148-4db4-a259-2f1a4bde2e56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:35 np0005534696 NetworkManager[48892]: <info>  [1764064895.9166] device (tapa184d330-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 05:01:35 np0005534696 NetworkManager[48892]: <info>  [1764064895.9174] device (tapa184d330-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 05:01:35 np0005534696 systemd-machined[192760]: New machine qemu-4-instance-0000000d.
Nov 25 05:01:35 np0005534696 systemd[1]: Started Virtual Machine qemu-4-instance-0000000d.
Nov 25 05:01:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.923 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[934cf1b0-82d5-485d-84ba-7882057bc7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.944 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.947 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[00989d94-4c1f-4e56-82c1-c45bb0ee71b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:35 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:35Z|00068|binding|INFO|Setting lport a184d330-a899-40a5-acc3-3af79dd5853e ovn-installed in OVS
Nov 25 05:01:35 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:35Z|00069|binding|INFO|Setting lport a184d330-a899-40a5-acc3-3af79dd5853e up in Southbound
Nov 25 05:01:35 np0005534696 nova_compute[228704]: 2025-11-25 10:01:35.951 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.970 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[f77ccfb6-716a-4fb9-bd18-d23977ad955e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:35 np0005534696 NetworkManager[48892]: <info>  [1764064895.9752] manager: (tapdd276fac-60): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.976 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3541ef-26e2-421b-ba30-4ca084bd0f75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:35 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:35.998 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[b4980e41-cd00-4702-8c8c-dc3d7eaba2c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.000 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[d8944a8f-56cf-4d6d-946f-fd58392db088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:36 np0005534696 NetworkManager[48892]: <info>  [1764064896.0183] device (tapdd276fac-60): carrier: link connected
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.021 232331 DEBUG oslo.privsep.daemon [-] privsep: reply[189b8b7e-d18b-41b6-8d0b-1bba7ac957a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.034 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d357f8-c206-4a3e-a142-769edac6387e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd276fac-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:2d:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 367744, 'reachable_time': 24243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237658, 'error': None, 'target': 'ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.047 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[a490b55e-1763-4fad-aa25-da8dd7a8e51b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:2d20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 367744, 'tstamp': 367744}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237659, 'error': None, 'target': 'ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.062 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[1d3b7eb1-8255-47bc-aa76-afecef566b5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdd276fac-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:2d:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 367744, 'reachable_time': 24243, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237660, 'error': None, 'target': 'ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.085 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4e2659-7b72-413a-97be-b136c08a903b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.124 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[8726d5c9-5d52-4a11-b67e-3f5848568f0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.125 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd276fac-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.125 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.125 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd276fac-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:36 np0005534696 kernel: tapdd276fac-60: entered promiscuous mode
Nov 25 05:01:36 np0005534696 NetworkManager[48892]: <info>  [1764064896.1278] manager: (tapdd276fac-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.127 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.132 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdd276fac-60, col_values=(('external_ids', {'iface-id': '99717449-3e6b-46ba-a23f-2ebbd1440910'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.133 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:36 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:36Z|00070|binding|INFO|Releasing lport 99717449-3e6b-46ba-a23f-2ebbd1440910 from this chassis (sb_readonly=0)
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.134 142676 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dd276fac-68ad-4f7b-84fc-64569c26436e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dd276fac-68ad-4f7b-84fc-64569c26436e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.135 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9fa78f-e3a6-4585-bce1-e998d769c90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.135 142676 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: global
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    log         /dev/log local0 debug
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    log-tag     haproxy-metadata-proxy-dd276fac-68ad-4f7b-84fc-64569c26436e
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    user        root
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    group       root
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    maxconn     1024
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    pidfile     /var/lib/neutron/external/pids/dd276fac-68ad-4f7b-84fc-64569c26436e.pid.haproxy
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    daemon
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: defaults
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    log global
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    mode http
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    option httplog
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    option dontlognull
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    option http-server-close
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    option forwardfor
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    retries                 3
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    timeout http-request    30s
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    timeout connect         30s
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    timeout client          32s
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    timeout server          32s
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    timeout http-keep-alive 30s
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: listen listener
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    bind 169.254.169.254:80
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    server metadata /var/lib/neutron/metadata_proxy
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]:    http-request add-header X-OVN-Network-ID dd276fac-68ad-4f7b-84fc-64569c26436e
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 25 05:01:36 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:36.136 142676 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e', 'env', 'PROCESS_TAG=haproxy-dd276fac-68ad-4f7b-84fc-64569c26436e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dd276fac-68ad-4f7b-84fc-64569c26436e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.146 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:36.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:36 np0005534696 podman[237689]: 2025-11-25 10:01:36.452941964 +0000 UTC m=+0.033697098 container create 13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 05:01:36 np0005534696 systemd[1]: Started libpod-conmon-13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46.scope.
Nov 25 05:01:36 np0005534696 systemd[1]: Started libcrun container.
Nov 25 05:01:36 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7592523b321786da5b476426e6e65005e719ef0f7bc77ff843f39c92aaf304dc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 05:01:36 np0005534696 podman[237689]: 2025-11-25 10:01:36.515400798 +0000 UTC m=+0.096155932 container init 13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 05:01:36 np0005534696 podman[237689]: 2025-11-25 10:01:36.519603809 +0000 UTC m=+0.100358933 container start 13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 05:01:36 np0005534696 podman[237689]: 2025-11-25 10:01:36.438224578 +0000 UTC m=+0.018979722 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620
Nov 25 05:01:36 np0005534696 neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e[237701]: [NOTICE]   (237705) : New worker (237707) forked
Nov 25 05:01:36 np0005534696 neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e[237701]: [NOTICE]   (237705) : Loading success.
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.602 228708 DEBUG nova.compute.manager [req-80bd6f77-6b05-4c2c-9d8c-4333e9d755ed req-3796f248-23f9-435b-89e9-33350ec598a1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.603 228708 DEBUG oslo_concurrency.lockutils [req-80bd6f77-6b05-4c2c-9d8c-4333e9d755ed req-3796f248-23f9-435b-89e9-33350ec598a1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.603 228708 DEBUG oslo_concurrency.lockutils [req-80bd6f77-6b05-4c2c-9d8c-4333e9d755ed req-3796f248-23f9-435b-89e9-33350ec598a1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.604 228708 DEBUG oslo_concurrency.lockutils [req-80bd6f77-6b05-4c2c-9d8c-4333e9d755ed req-3796f248-23f9-435b-89e9-33350ec598a1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.604 228708 DEBUG nova.compute.manager [req-80bd6f77-6b05-4c2c-9d8c-4333e9d755ed req-3796f248-23f9-435b-89e9-33350ec598a1 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Processing event network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.938 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064896.9384696, 9f013e5c-902b-4b58-8656-1c3788e671be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.939 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] VM Started (Lifecycle Event)#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.941 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.947 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.951 228708 INFO nova.virt.libvirt.driver [-] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Instance spawned successfully.#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.952 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.962 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.964 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.973 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.974 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.975 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.975 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.975 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.976 228708 DEBUG nova.virt.libvirt.driver [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.979 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.979 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064896.9385715, 9f013e5c-902b-4b58-8656-1c3788e671be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 05:01:36 np0005534696 nova_compute[228704]: 2025-11-25 10:01:36.980 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] VM Paused (Lifecycle Event)#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.000 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:01:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.003 228708 DEBUG nova.virt.driver [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] Emitting event <LifecycleEvent: 1764064896.9437044, 9f013e5c-902b-4b58-8656-1c3788e671be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.003 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] VM Resumed (Lifecycle Event)#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.019 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.022 228708 DEBUG nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.046 228708 INFO nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Took 6.81 seconds to spawn the instance on the hypervisor.#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.047 228708 DEBUG nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.048 228708 INFO nova.compute.manager [None req-430cb5e8-65b3-4e33-9376-c32ad6ad1744 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.088 228708 INFO nova.compute.manager [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Took 7.49 seconds to build instance.#033[00m
Nov 25 05:01:37 np0005534696 nova_compute[228704]: 2025-11-25 10:01:37.107 228708 DEBUG oslo_concurrency.lockutils [None req-294bfee9-89db-4aa5-9965-429d901fadc8 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:37.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:38.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:38 np0005534696 podman[237756]: 2025-11-25 10:01:38.368239903 +0000 UTC m=+0.074536436 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 05:01:38 np0005534696 nova_compute[228704]: 2025-11-25 10:01:38.678 228708 DEBUG nova.compute.manager [req-a853da8e-53e7-4155-8e01-d468790134d6 req-4dcaddd6-cb82-42e1-a038-d2af88b4c207 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:38 np0005534696 nova_compute[228704]: 2025-11-25 10:01:38.678 228708 DEBUG oslo_concurrency.lockutils [req-a853da8e-53e7-4155-8e01-d468790134d6 req-4dcaddd6-cb82-42e1-a038-d2af88b4c207 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:38 np0005534696 nova_compute[228704]: 2025-11-25 10:01:38.679 228708 DEBUG oslo_concurrency.lockutils [req-a853da8e-53e7-4155-8e01-d468790134d6 req-4dcaddd6-cb82-42e1-a038-d2af88b4c207 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:38 np0005534696 nova_compute[228704]: 2025-11-25 10:01:38.679 228708 DEBUG oslo_concurrency.lockutils [req-a853da8e-53e7-4155-8e01-d468790134d6 req-4dcaddd6-cb82-42e1-a038-d2af88b4c207 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:38 np0005534696 nova_compute[228704]: 2025-11-25 10:01:38.679 228708 DEBUG nova.compute.manager [req-a853da8e-53e7-4155-8e01-d468790134d6 req-4dcaddd6-cb82-42e1-a038-d2af88b4c207 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] No waiting events found dispatching network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 05:01:38 np0005534696 nova_compute[228704]: 2025-11-25 10:01:38.679 228708 WARNING nova.compute.manager [req-a853da8e-53e7-4155-8e01-d468790134d6 req-4dcaddd6-cb82-42e1-a038-d2af88b4c207 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received unexpected event network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e for instance with vm_state active and task_state None.#033[00m
Nov 25 05:01:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:39.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:39 np0005534696 nova_compute[228704]: 2025-11-25 10:01:39.653 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:40 np0005534696 nova_compute[228704]: 2025-11-25 10:01:40.091 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:40.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:01:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:41.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:01:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:42 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:42Z|00071|binding|INFO|Releasing lport 99717449-3e6b-46ba-a23f-2ebbd1440910 from this chassis (sb_readonly=0)
Nov 25 05:01:42 np0005534696 nova_compute[228704]: 2025-11-25 10:01:42.065 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:42 np0005534696 NetworkManager[48892]: <info>  [1764064902.0662] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 25 05:01:42 np0005534696 NetworkManager[48892]: <info>  [1764064902.0669] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 25 05:01:42 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:42Z|00072|binding|INFO|Releasing lport 99717449-3e6b-46ba-a23f-2ebbd1440910 from this chassis (sb_readonly=0)
Nov 25 05:01:42 np0005534696 nova_compute[228704]: 2025-11-25 10:01:42.095 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:42 np0005534696 nova_compute[228704]: 2025-11-25 10:01:42.098 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:42 np0005534696 nova_compute[228704]: 2025-11-25 10:01:42.479 228708 DEBUG nova.compute.manager [req-1ee898fe-141a-4843-96c5-76d1d00eab5e req-cdcf9d77-735e-43a5-b558-17c3d8fbf798 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-changed-a184d330-a899-40a5-acc3-3af79dd5853e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:42 np0005534696 nova_compute[228704]: 2025-11-25 10:01:42.479 228708 DEBUG nova.compute.manager [req-1ee898fe-141a-4843-96c5-76d1d00eab5e req-cdcf9d77-735e-43a5-b558-17c3d8fbf798 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Refreshing instance network info cache due to event network-changed-a184d330-a899-40a5-acc3-3af79dd5853e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 05:01:42 np0005534696 nova_compute[228704]: 2025-11-25 10:01:42.480 228708 DEBUG oslo_concurrency.lockutils [req-1ee898fe-141a-4843-96c5-76d1d00eab5e req-cdcf9d77-735e-43a5-b558-17c3d8fbf798 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 05:01:42 np0005534696 nova_compute[228704]: 2025-11-25 10:01:42.480 228708 DEBUG oslo_concurrency.lockutils [req-1ee898fe-141a-4843-96c5-76d1d00eab5e req-cdcf9d77-735e-43a5-b558-17c3d8fbf798 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 05:01:42 np0005534696 nova_compute[228704]: 2025-11-25 10:01:42.480 228708 DEBUG nova.network.neutron [req-1ee898fe-141a-4843-96c5-76d1d00eab5e req-cdcf9d77-735e-43a5-b558-17c3d8fbf798 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Refreshing network info cache for port a184d330-a899-40a5-acc3-3af79dd5853e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 05:01:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:43.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:44 np0005534696 nova_compute[228704]: 2025-11-25 10:01:44.396 228708 DEBUG nova.network.neutron [req-1ee898fe-141a-4843-96c5-76d1d00eab5e req-cdcf9d77-735e-43a5-b558-17c3d8fbf798 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Updated VIF entry in instance network info cache for port a184d330-a899-40a5-acc3-3af79dd5853e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 05:01:44 np0005534696 nova_compute[228704]: 2025-11-25 10:01:44.397 228708 DEBUG nova.network.neutron [req-1ee898fe-141a-4843-96c5-76d1d00eab5e req-cdcf9d77-735e-43a5-b558-17c3d8fbf798 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Updating instance_info_cache with network_info: [{"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:01:44 np0005534696 nova_compute[228704]: 2025-11-25 10:01:44.412 228708 DEBUG oslo_concurrency.lockutils [req-1ee898fe-141a-4843-96c5-76d1d00eab5e req-cdcf9d77-735e-43a5-b558-17c3d8fbf798 c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 05:01:44 np0005534696 nova_compute[228704]: 2025-11-25 10:01:44.656 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:45 np0005534696 nova_compute[228704]: 2025-11-25 10:01:45.092 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:45.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:46 np0005534696 podman[237814]: 2025-11-25 10:01:46.333521098 +0000 UTC m=+0.046066860 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 05:01:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:46.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:47.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:01:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:48.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:01:48 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:48Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:49:88 10.100.0.3
Nov 25 05:01:48 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:48Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:49:88 10.100.0.3
Nov 25 05:01:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:49.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:49 np0005534696 nova_compute[228704]: 2025-11-25 10:01:49.659 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:50 np0005534696 nova_compute[228704]: 2025-11-25 10:01:50.094 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:50.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:51.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:52.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:53.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:54.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:54 np0005534696 nova_compute[228704]: 2025-11-25 10:01:54.662 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:54 np0005534696 nova_compute[228704]: 2025-11-25 10:01:54.874 228708 INFO nova.compute.manager [None req-26b6ce68-674a-486e-bd45-8dcdd1c09d9e c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Get console output#033[00m
Nov 25 05:01:54 np0005534696 nova_compute[228704]: 2025-11-25 10:01:54.877 232536 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 05:01:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:55 np0005534696 nova_compute[228704]: 2025-11-25 10:01:55.095 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:55 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:55Z|00073|binding|INFO|Releasing lport 99717449-3e6b-46ba-a23f-2ebbd1440910 from this chassis (sb_readonly=0)
Nov 25 05:01:55 np0005534696 nova_compute[228704]: 2025-11-25 10:01:55.693 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:55 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:55Z|00074|binding|INFO|Releasing lport 99717449-3e6b-46ba-a23f-2ebbd1440910 from this chassis (sb_readonly=0)
Nov 25 05:01:55 np0005534696 nova_compute[228704]: 2025-11-25 10:01:55.756 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:01:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:56.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:56 np0005534696 nova_compute[228704]: 2025-11-25 10:01:56.879 228708 INFO nova.compute.manager [None req-cf10a328-693f-4910-898f-84e98b32b821 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Get console output#033[00m
Nov 25 05:01:56 np0005534696 nova_compute[228704]: 2025-11-25 10:01:56.882 232536 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 05:01:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:57 np0005534696 NetworkManager[48892]: <info>  [1764064917.7679] manager: (patch-provnet-378b44dd-6659-420b-83ad-73c68273201a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 25 05:01:57 np0005534696 nova_compute[228704]: 2025-11-25 10:01:57.768 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:57 np0005534696 NetworkManager[48892]: <info>  [1764064917.7701] manager: (patch-br-int-to-provnet-378b44dd-6659-420b-83ad-73c68273201a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 25 05:01:57 np0005534696 nova_compute[228704]: 2025-11-25 10:01:57.827 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:57 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:57Z|00075|binding|INFO|Releasing lport 99717449-3e6b-46ba-a23f-2ebbd1440910 from this chassis (sb_readonly=0)
Nov 25 05:01:57 np0005534696 nova_compute[228704]: 2025-11-25 10:01:57.830 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.035 228708 INFO nova.compute.manager [None req-87101d97-e549-446c-9d74-f5f0652bb559 c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Get console output#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.039 232536 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 25 05:01:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:01:58.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.775 228708 DEBUG nova.compute.manager [req-a0896fd8-f7bd-484f-a656-e3ed88a3af47 req-58489e17-1dc9-4bf8-9e6d-818f56a80a7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-changed-a184d330-a899-40a5-acc3-3af79dd5853e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.775 228708 DEBUG nova.compute.manager [req-a0896fd8-f7bd-484f-a656-e3ed88a3af47 req-58489e17-1dc9-4bf8-9e6d-818f56a80a7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Refreshing instance network info cache due to event network-changed-a184d330-a899-40a5-acc3-3af79dd5853e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.775 228708 DEBUG oslo_concurrency.lockutils [req-a0896fd8-f7bd-484f-a656-e3ed88a3af47 req-58489e17-1dc9-4bf8-9e6d-818f56a80a7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.775 228708 DEBUG oslo_concurrency.lockutils [req-a0896fd8-f7bd-484f-a656-e3ed88a3af47 req-58489e17-1dc9-4bf8-9e6d-818f56a80a7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquired lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.775 228708 DEBUG nova.network.neutron [req-a0896fd8-f7bd-484f-a656-e3ed88a3af47 req-58489e17-1dc9-4bf8-9e6d-818f56a80a7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Refreshing network info cache for port a184d330-a899-40a5-acc3-3af79dd5853e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.860 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "9f013e5c-902b-4b58-8656-1c3788e671be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.861 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.861 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.861 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.861 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.862 228708 INFO nova.compute.manager [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Terminating instance#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.863 228708 DEBUG nova.compute.manager [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 25 05:01:58 np0005534696 kernel: tapa184d330-a8 (unregistering): left promiscuous mode
Nov 25 05:01:58 np0005534696 NetworkManager[48892]: <info>  [1764064918.8980] device (tapa184d330-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.903 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:58 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:58Z|00076|binding|INFO|Releasing lport a184d330-a899-40a5-acc3-3af79dd5853e from this chassis (sb_readonly=0)
Nov 25 05:01:58 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:58Z|00077|binding|INFO|Setting lport a184d330-a899-40a5-acc3-3af79dd5853e down in Southbound
Nov 25 05:01:58 np0005534696 ovn_controller[133535]: 2025-11-25T10:01:58Z|00078|binding|INFO|Removing iface tapa184d330-a8 ovn-installed in OVS
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.907 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:58 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:58.912 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:49:88 10.100.0.3'], port_security=['fa:16:3e:19:49:88 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9f013e5c-902b-4b58-8656-1c3788e671be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dd276fac-68ad-4f7b-84fc-64569c26436e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fc0c386067c7443085ef3a11d7bc772f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b75b8d66-4fb3-472a-a751-9610237a66a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b99af3f9-ef06-4225-90e4-16d8f4ebb7da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>], logical_port=a184d330-a899-40a5-acc3-3af79dd5853e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7facf8b10700>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 05:01:58 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:58.913 142676 INFO neutron.agent.ovn.metadata.agent [-] Port a184d330-a899-40a5-acc3-3af79dd5853e in datapath dd276fac-68ad-4f7b-84fc-64569c26436e unbound from our chassis#033[00m
Nov 25 05:01:58 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:58.914 142676 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dd276fac-68ad-4f7b-84fc-64569c26436e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 25 05:01:58 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:58.915 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[b72a5ddd-2411-4c05-9f47-a9c5cf9c14c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:58 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:58.916 142676 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e namespace which is not needed anymore#033[00m
Nov 25 05:01:58 np0005534696 nova_compute[228704]: 2025-11-25 10:01:58.923 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:58 np0005534696 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 25 05:01:58 np0005534696 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000d.scope: Consumed 11.525s CPU time.
Nov 25 05:01:58 np0005534696 systemd-machined[192760]: Machine qemu-4-instance-0000000d terminated.
Nov 25 05:01:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:58 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:01:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:58 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:01:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:58 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:01:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:01:58 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:01:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:01:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:01:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:01:59 np0005534696 neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e[237701]: [NOTICE]   (237705) : haproxy version is 2.8.14-c23fe91
Nov 25 05:01:59 np0005534696 neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e[237701]: [NOTICE]   (237705) : path to executable is /usr/sbin/haproxy
Nov 25 05:01:59 np0005534696 neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e[237701]: [WARNING]  (237705) : Exiting Master process...
Nov 25 05:01:59 np0005534696 neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e[237701]: [ALERT]    (237705) : Current worker (237707) exited with code 143 (Terminated)
Nov 25 05:01:59 np0005534696 neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e[237701]: [WARNING]  (237705) : All workers exited. Exiting... (0)
Nov 25 05:01:59 np0005534696 systemd[1]: libpod-13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46.scope: Deactivated successfully.
Nov 25 05:01:59 np0005534696 podman[237868]: 2025-11-25 10:01:59.018685634 +0000 UTC m=+0.036688586 container died 13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 05:01:59 np0005534696 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46-userdata-shm.mount: Deactivated successfully.
Nov 25 05:01:59 np0005534696 systemd[1]: var-lib-containers-storage-overlay-7592523b321786da5b476426e6e65005e719ef0f7bc77ff843f39c92aaf304dc-merged.mount: Deactivated successfully.
Nov 25 05:01:59 np0005534696 podman[237868]: 2025-11-25 10:01:59.042940987 +0000 UTC m=+0.060943939 container cleanup 13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 05:01:59 np0005534696 systemd[1]: libpod-conmon-13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46.scope: Deactivated successfully.
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.086 228708 INFO nova.virt.libvirt.driver [-] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Instance destroyed successfully.#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.088 228708 DEBUG nova.objects.instance [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lazy-loading 'resources' on Instance uuid 9f013e5c-902b-4b58-8656-1c3788e671be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 25 05:01:59 np0005534696 podman[237891]: 2025-11-25 10:01:59.093890067 +0000 UTC m=+0.035207094 container remove 13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.098 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[35621f56-8e8f-48cb-8300-0f3cce1b082b]: (4, ('Tue Nov 25 10:01:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e (13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46)\n13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46\nTue Nov 25 10:01:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e (13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46)\n13f817edd52bcb4b38ea418466e2d0d36d53f7b979722a38425d15bce549eb46\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.100 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[39eaed00-9c01-4fa9-8ea6-f17241c3888a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.101 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd276fac-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.102 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:59 np0005534696 kernel: tapdd276fac-60: left promiscuous mode
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.105 228708 DEBUG nova.virt.libvirt.vif [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T10:01:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1639333252',display_name='tempest-TestNetworkBasicOps-server-1639333252',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1639333252',id=13,image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELe2XyeB3Hp0jVOZZweS/eo3OV1PQJW5T4XALveJ001xNixVkH2jUpCWQNq58pd4qk5U0SvW8D83cOaXuPyEddHvS7Y/4XXY4odokYWz9B9aIfKLDw7+EQMYoe8Tkc9zg==',key_name='tempest-TestNetworkBasicOps-613762435',keypairs=<?>,launch_index=0,launched_at=2025-11-25T10:01:37Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fc0c386067c7443085ef3a11d7bc772f',ramdisk_id='',reservation_id='r-md2iio3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ddd1b7-1bba-493e-a10f-b03a12ab3457',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-804701909',owner_user_name='tempest-TestNetworkBasicOps-804701909-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T10:01:37Z,user_data=None,user_id='c92fada0e9fc4e9482d24b33b311d806',uuid=9f013e5c-902b-4b58-8656-1c3788e671be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.106 228708 DEBUG nova.network.os_vif_util [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converting VIF {"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.107 228708 DEBUG nova.network.os_vif_util [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:49:88,bridge_name='br-int',has_traffic_filtering=True,id=a184d330-a899-40a5-acc3-3af79dd5853e,network=Network(dd276fac-68ad-4f7b-84fc-64569c26436e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa184d330-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.108 228708 DEBUG os_vif [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:49:88,bridge_name='br-int',has_traffic_filtering=True,id=a184d330-a899-40a5-acc3-3af79dd5853e,network=Network(dd276fac-68ad-4f7b-84fc-64569c26436e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa184d330-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.109 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.109 228708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa184d330-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.112 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.119 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.120 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.121 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[5fbc2bba-c2f2-439c-ae8e-8b3df0ffe871]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.122 228708 INFO os_vif [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:49:88,bridge_name='br-int',has_traffic_filtering=True,id=a184d330-a899-40a5-acc3-3af79dd5853e,network=Network(dd276fac-68ad-4f7b-84fc-64569c26436e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa184d330-a8')#033[00m
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.129 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[7dce2bb7-27e0-48e8-8352-4df77520ba0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.130 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[59b6352b-94b5-4d55-bb5f-f4bcf179291e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.141 232274 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9721a1-e9d2-4f8d-a800-c489016b9ff5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 367739, 'reachable_time': 33970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237929, 'error': None, 'target': 'ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:59 np0005534696 systemd[1]: run-netns-ovnmeta\x2ddd276fac\x2d68ad\x2d4f7b\x2d84fc\x2d64569c26436e.mount: Deactivated successfully.
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.144 142787 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dd276fac-68ad-4f7b-84fc-64569c26436e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 25 05:01:59 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:01:59.144 142787 DEBUG oslo.privsep.daemon [-] privsep: reply[66e554c7-571a-42db-b828-84cd8944ccc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.270 228708 INFO nova.virt.libvirt.driver [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Deleting instance files /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be_del#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.271 228708 INFO nova.virt.libvirt.driver [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Deletion of /var/lib/nova/instances/9f013e5c-902b-4b58-8656-1c3788e671be_del complete#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.324 228708 INFO nova.compute.manager [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.324 228708 DEBUG oslo.service.loopingcall [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.326 228708 DEBUG nova.compute.manager [-] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.326 228708 DEBUG nova.network.neutron [-] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 25 05:01:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:01:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:01:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:01:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.741 228708 DEBUG nova.compute.manager [req-dbb04531-ea97-4ea0-b92e-8b9f98521125 req-4345af9b-ea53-46f8-b0ca-d480a344e41b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-vif-unplugged-a184d330-a899-40a5-acc3-3af79dd5853e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.742 228708 DEBUG oslo_concurrency.lockutils [req-dbb04531-ea97-4ea0-b92e-8b9f98521125 req-4345af9b-ea53-46f8-b0ca-d480a344e41b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.742 228708 DEBUG oslo_concurrency.lockutils [req-dbb04531-ea97-4ea0-b92e-8b9f98521125 req-4345af9b-ea53-46f8-b0ca-d480a344e41b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.742 228708 DEBUG oslo_concurrency.lockutils [req-dbb04531-ea97-4ea0-b92e-8b9f98521125 req-4345af9b-ea53-46f8-b0ca-d480a344e41b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.743 228708 DEBUG nova.compute.manager [req-dbb04531-ea97-4ea0-b92e-8b9f98521125 req-4345af9b-ea53-46f8-b0ca-d480a344e41b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] No waiting events found dispatching network-vif-unplugged-a184d330-a899-40a5-acc3-3af79dd5853e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 05:01:59 np0005534696 nova_compute[228704]: 2025-11-25 10:01:59.743 228708 DEBUG nova.compute.manager [req-dbb04531-ea97-4ea0-b92e-8b9f98521125 req-4345af9b-ea53-46f8-b0ca-d480a344e41b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-vif-unplugged-a184d330-a899-40a5-acc3-3af79dd5853e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 25 05:02:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.097 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:00.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.451 228708 DEBUG nova.network.neutron [-] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.462 228708 INFO nova.compute.manager [-] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Took 1.14 seconds to deallocate network for instance.#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.499 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.499 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.538 228708 DEBUG oslo_concurrency.processutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.570 228708 DEBUG nova.network.neutron [req-a0896fd8-f7bd-484f-a656-e3ed88a3af47 req-58489e17-1dc9-4bf8-9e6d-818f56a80a7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Updated VIF entry in instance network info cache for port a184d330-a899-40a5-acc3-3af79dd5853e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.571 228708 DEBUG nova.network.neutron [req-a0896fd8-f7bd-484f-a656-e3ed88a3af47 req-58489e17-1dc9-4bf8-9e6d-818f56a80a7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Updating instance_info_cache with network_info: [{"id": "a184d330-a899-40a5-acc3-3af79dd5853e", "address": "fa:16:3e:19:49:88", "network": {"id": "dd276fac-68ad-4f7b-84fc-64569c26436e", "bridge": "br-int", "label": "tempest-network-smoke--217339193", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fc0c386067c7443085ef3a11d7bc772f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa184d330-a8", "ovs_interfaceid": "a184d330-a899-40a5-acc3-3af79dd5853e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.582 228708 DEBUG oslo_concurrency.lockutils [req-a0896fd8-f7bd-484f-a656-e3ed88a3af47 req-58489e17-1dc9-4bf8-9e6d-818f56a80a7b c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Releasing lock "refresh_cache-9f013e5c-902b-4b58-8656-1c3788e671be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.846 228708 DEBUG nova.compute.manager [req-d4c93794-a9f9-4b58-8faf-eb09c20509f2 req-c92b90d3-8c3b-4db5-a177-e5909e84438f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-vif-deleted-a184d330-a899-40a5-acc3-3af79dd5853e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.847 228708 INFO nova.compute.manager [req-d4c93794-a9f9-4b58-8faf-eb09c20509f2 req-c92b90d3-8c3b-4db5-a177-e5909e84438f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Neutron deleted interface a184d330-a899-40a5-acc3-3af79dd5853e; detaching it from the instance and deleting it from the info cache#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.847 228708 DEBUG nova.network.neutron [req-d4c93794-a9f9-4b58-8faf-eb09c20509f2 req-c92b90d3-8c3b-4db5-a177-e5909e84438f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.863 228708 DEBUG nova.compute.manager [req-d4c93794-a9f9-4b58-8faf-eb09c20509f2 req-c92b90d3-8c3b-4db5-a177-e5909e84438f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Detach interface failed, port_id=a184d330-a899-40a5-acc3-3af79dd5853e, reason: Instance 9f013e5c-902b-4b58-8656-1c3788e671be could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 25 05:02:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:02:00 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1795209543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.882 228708 DEBUG oslo_concurrency.processutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.886 228708 DEBUG nova.compute.provider_tree [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.895 228708 DEBUG nova.scheduler.client.report [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.908 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:02:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.931 228708 INFO nova.scheduler.client.report [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Deleted allocations for instance 9f013e5c-902b-4b58-8656-1c3788e671be#033[00m
Nov 25 05:02:00 np0005534696 nova_compute[228704]: 2025-11-25 10:02:00.968 228708 DEBUG oslo_concurrency.lockutils [None req-75e0112d-d8aa-460d-bdd1-115a397c052c c92fada0e9fc4e9482d24b33b311d806 fc0c386067c7443085ef3a11d7bc772f - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:02:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:01 np0005534696 nova_compute[228704]: 2025-11-25 10:02:01.807 228708 DEBUG nova.compute.manager [req-f68306c5-090d-4153-a4ff-b61797d6ffcb req-750a52df-2fb8-435e-b967-b4f4c63a978f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received event network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 25 05:02:01 np0005534696 nova_compute[228704]: 2025-11-25 10:02:01.807 228708 DEBUG oslo_concurrency.lockutils [req-f68306c5-090d-4153-a4ff-b61797d6ffcb req-750a52df-2fb8-435e-b967-b4f4c63a978f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Acquiring lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:02:01 np0005534696 nova_compute[228704]: 2025-11-25 10:02:01.807 228708 DEBUG oslo_concurrency.lockutils [req-f68306c5-090d-4153-a4ff-b61797d6ffcb req-750a52df-2fb8-435e-b967-b4f4c63a978f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:02:01 np0005534696 nova_compute[228704]: 2025-11-25 10:02:01.807 228708 DEBUG oslo_concurrency.lockutils [req-f68306c5-090d-4153-a4ff-b61797d6ffcb req-750a52df-2fb8-435e-b967-b4f4c63a978f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] Lock "9f013e5c-902b-4b58-8656-1c3788e671be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:02:01 np0005534696 nova_compute[228704]: 2025-11-25 10:02:01.807 228708 DEBUG nova.compute.manager [req-f68306c5-090d-4153-a4ff-b61797d6ffcb req-750a52df-2fb8-435e-b967-b4f4c63a978f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] No waiting events found dispatching network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 25 05:02:01 np0005534696 nova_compute[228704]: 2025-11-25 10:02:01.808 228708 WARNING nova.compute.manager [req-f68306c5-090d-4153-a4ff-b61797d6ffcb req-750a52df-2fb8-435e-b967-b4f4c63a978f c59b1f6b95e648d2a462352707b70363 4baca3d790ca43f6974e72974114257e - - default default] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Received unexpected event network-vif-plugged-a184d330-a899-40a5-acc3-3af79dd5853e for instance with vm_state deleted and task_state None.#033[00m
Nov 25 05:02:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:02 np0005534696 podman[237960]: 2025-11-25 10:02:02.334300915 +0000 UTC m=+0.039104980 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 05:02:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:02.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:02 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:02 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:02 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:03.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:04 np0005534696 nova_compute[228704]: 2025-11-25 10:02:04.110 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:04.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:04 np0005534696 nova_compute[228704]: 2025-11-25 10:02:04.484 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:04 np0005534696 nova_compute[228704]: 2025-11-25 10:02:04.568 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:05 np0005534696 nova_compute[228704]: 2025-11-25 10:02:05.098 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:02:05.356 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:02:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:02:05.356 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:02:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:02:05.357 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:02:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:05.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:06.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:08 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:09 np0005534696 nova_compute[228704]: 2025-11-25 10:02:09.111 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:09 np0005534696 podman[238009]: 2025-11-25 10:02:09.347183657 +0000 UTC m=+0.055640473 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 05:02:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:09.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:10 np0005534696 nova_compute[228704]: 2025-11-25 10:02:10.100 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:10.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.622707) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930622742, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2364, "num_deletes": 251, "total_data_size": 6062255, "memory_usage": 6160456, "flush_reason": "Manual Compaction"}
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930631668, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3937847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26261, "largest_seqno": 28619, "table_properties": {"data_size": 3928612, "index_size": 5729, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19664, "raw_average_key_size": 20, "raw_value_size": 3909867, "raw_average_value_size": 4030, "num_data_blocks": 252, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064721, "oldest_key_time": 1764064721, "file_creation_time": 1764064930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 8983 microseconds, and 6146 cpu microseconds.
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.631692) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3937847 bytes OK
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.631703) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.632231) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.632242) EVENT_LOG_v1 {"time_micros": 1764064930632239, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.632252) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6051864, prev total WAL file size 6051864, number of live WAL files 2.
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.633082) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3845KB)], [51(11MB)]
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930633110, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16019133, "oldest_snapshot_seqno": -1}
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5853 keys, 13905913 bytes, temperature: kUnknown
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930662248, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 13905913, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13866632, "index_size": 23555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148610, "raw_average_key_size": 25, "raw_value_size": 13760959, "raw_average_value_size": 2351, "num_data_blocks": 959, "num_entries": 5853, "num_filter_entries": 5853, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764064930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.662402) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 13905913 bytes
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.662946) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 549.1 rd, 476.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.5 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 6370, records dropped: 517 output_compression: NoCompression
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.662962) EVENT_LOG_v1 {"time_micros": 1764064930662954, "job": 30, "event": "compaction_finished", "compaction_time_micros": 29171, "compaction_time_cpu_micros": 21143, "output_level": 6, "num_output_files": 1, "total_output_size": 13905913, "num_input_records": 6370, "num_output_records": 5853, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930664155, "job": 30, "event": "table_file_deletion", "file_number": 53}
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764064930665875, "job": 30, "event": "table_file_deletion", "file_number": 51}
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.633041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.665966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.665970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.665971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.665972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:02:10.665974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:02:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:11.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:12.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:12 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:12 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:12 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:13.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:14 np0005534696 nova_compute[228704]: 2025-11-25 10:02:14.086 228708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764064919.0855281, 9f013e5c-902b-4b58-8656-1c3788e671be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 25 05:02:14 np0005534696 nova_compute[228704]: 2025-11-25 10:02:14.087 228708 INFO nova.compute.manager [-] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] VM Stopped (Lifecycle Event)#033[00m
Nov 25 05:02:14 np0005534696 nova_compute[228704]: 2025-11-25 10:02:14.110 228708 DEBUG nova.compute.manager [None req-6679845a-dbb0-4cd4-9104-6c0fd95ed0c8 - - - - - -] [instance: 9f013e5c-902b-4b58-8656-1c3788e671be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 25 05:02:14 np0005534696 nova_compute[228704]: 2025-11-25 10:02:14.111 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:14.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:15 np0005534696 nova_compute[228704]: 2025-11-25 10:02:15.101 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:15.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:17 np0005534696 podman[238041]: 2025-11-25 10:02:17.32741598 +0000 UTC m=+0.039849313 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 05:02:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:17.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:18 np0005534696 nova_compute[228704]: 2025-11-25 10:02:18.351 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:02:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:18.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:02:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:19 np0005534696 nova_compute[228704]: 2025-11-25 10:02:19.112 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:19.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:20 np0005534696 nova_compute[228704]: 2025-11-25 10:02:20.101 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:02:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:20.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:02:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:21.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:02:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:22.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:02:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.373 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.373 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.373 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.373 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.373 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:02:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:02:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:23.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:02:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:02:23 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3580303334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.713 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.912 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.913 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4928MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.913 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.913 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.980 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.980 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:02:23 np0005534696 nova_compute[228704]: 2025-11-25 10:02:23.994 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:02:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:24 np0005534696 nova_compute[228704]: 2025-11-25 10:02:24.113 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:24 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:02:24 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/364596767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:02:24 np0005534696 nova_compute[228704]: 2025-11-25 10:02:24.334 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:02:24 np0005534696 nova_compute[228704]: 2025-11-25 10:02:24.338 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:02:24 np0005534696 nova_compute[228704]: 2025-11-25 10:02:24.355 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:02:24 np0005534696 nova_compute[228704]: 2025-11-25 10:02:24.367 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:02:24 np0005534696 nova_compute[228704]: 2025-11-25 10:02:24.368 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:02:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:24.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:25 np0005534696 nova_compute[228704]: 2025-11-25 10:02:25.103 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:25 np0005534696 nova_compute[228704]: 2025-11-25 10:02:25.368 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:25 np0005534696 nova_compute[228704]: 2025-11-25 10:02:25.368 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:25 np0005534696 nova_compute[228704]: 2025-11-25 10:02:25.368 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:25 np0005534696 nova_compute[228704]: 2025-11-25 10:02:25.369 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:02:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:25.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:27.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:28 np0005534696 nova_compute[228704]: 2025-11-25 10:02:28.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:28.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:02:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:02:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:02:28 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:02:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:29 np0005534696 nova_compute[228704]: 2025-11-25 10:02:29.115 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:29 np0005534696 nova_compute[228704]: 2025-11-25 10:02:29.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:29.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:30 np0005534696 nova_compute[228704]: 2025-11-25 10:02:30.104 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:30.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:31 np0005534696 nova_compute[228704]: 2025-11-25 10:02:31.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:31 np0005534696 nova_compute[228704]: 2025-11-25 10:02:31.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:02:31 np0005534696 nova_compute[228704]: 2025-11-25 10:02:31.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:02:31 np0005534696 nova_compute[228704]: 2025-11-25 10:02:31.370 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:02:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:31.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:32.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:02:32 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:02:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:33 np0005534696 podman[238247]: 2025-11-25 10:02:33.332101887 +0000 UTC m=+0.035885102 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 05:02:33 np0005534696 nova_compute[228704]: 2025-11-25 10:02:33.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:33.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:34 np0005534696 nova_compute[228704]: 2025-11-25 10:02:34.117 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:34 np0005534696 nova_compute[228704]: 2025-11-25 10:02:34.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:02:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:34.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:34 np0005534696 ovn_controller[133535]: 2025-11-25T10:02:34Z|00079|memory_trim|INFO|Detected inactivity (last active 30025 ms ago): trimming memory
Nov 25 05:02:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:35 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:35 np0005534696 nova_compute[228704]: 2025-11-25 10:02:35.106 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:35.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:36.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:37.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:38.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:39 np0005534696 nova_compute[228704]: 2025-11-25 10:02:39.117 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:39.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:40 np0005534696 nova_compute[228704]: 2025-11-25 10:02:40.108 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:40 np0005534696 podman[238270]: 2025-11-25 10:02:40.341195558 +0000 UTC m=+0.053592745 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 05:02:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:40.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:41.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:42.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:43.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:44 np0005534696 nova_compute[228704]: 2025-11-25 10:02:44.119 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:02:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:02:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:45 np0005534696 nova_compute[228704]: 2025-11-25 10:02:45.110 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:45.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:45 np0005534696 systemd-logind[744]: New session 54 of user zuul.
Nov 25 05:02:45 np0005534696 systemd[1]: Started Session 54 of User zuul.
Nov 25 05:02:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:47.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:48 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 05:02:48 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4229107572' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 05:02:48 np0005534696 podman[238538]: 2025-11-25 10:02:48.334195054 +0000 UTC m=+0.043342106 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 05:02:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:49 np0005534696 nova_compute[228704]: 2025-11-25 10:02:49.121 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:49.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:50 np0005534696 nova_compute[228704]: 2025-11-25 10:02:50.111 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 05:02:50 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5580 writes, 29K keys, 5580 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s#012Cumulative WAL: 5580 writes, 5580 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1522 writes, 7291 keys, 1522 commit groups, 1.0 writes per commit group, ingest: 16.64 MB, 0.03 MB/s#012Interval WAL: 1522 writes, 1522 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    410.0      0.10              0.07        15    0.007       0      0       0.0       0.0#012  L6      1/0   13.26 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    500.7    427.9      0.41              0.28        14    0.029     72K   7352       0.0       0.0#012 Sum      1/0   13.26 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    398.3    424.2      0.51              0.35        29    0.018     72K   7352       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    410.7    418.4      0.18              0.13        10    0.018     30K   2534       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    500.7    427.9      0.41              0.28        14    0.029     72K   7352       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    413.2      0.10              0.07        14    0.007       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.042, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.21 GB write, 0.12 MB/s write, 0.20 GB read, 0.11 MB/s read, 0.5 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56183fd2f350#2 capacity: 304.00 MB usage: 18.14 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000115 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(930,17.57 MB,5.77905%) FilterBlock(29,214.36 KB,0.0688603%) IndexBlock(29,373.06 KB,0.119842%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 25 05:02:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:51.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:02:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:52.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:02:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:54 np0005534696 nova_compute[228704]: 2025-11-25 10:02:54.122 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:54.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:54 np0005534696 ovs-vsctl[238686]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 05:02:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:02:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:02:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:02:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:02:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:55 np0005534696 nova_compute[228704]: 2025-11-25 10:02:55.110 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:02:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:55.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:02:55 np0005534696 virtqemud[228342]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 05:02:55 np0005534696 virtqemud[228342]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 05:02:55 np0005534696 virtqemud[228342]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 05:02:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:02:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:56 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: cache status {prefix=cache status} (starting...)
Nov 25 05:02:56 np0005534696 lvm[238982]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 05:02:56 np0005534696 lvm[238982]: VG ceph_vg0 finished
Nov 25 05:02:56 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: client ls {prefix=client ls} (starting...)
Nov 25 05:02:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:56.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:56 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 05:02:56 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Nov 25 05:02:56 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2345564020' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 25 05:02:56 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 05:02:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:57 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 05:02:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 05:02:57 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2036901220' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 05:02:57 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 05:02:57 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 05:02:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:57.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:57 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 05:02:57 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 05:02:57 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 05:02:57 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 25 05:02:57 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1589163821' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 05:02:57 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: ops {prefix=ops} (starting...)
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3224074830' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 05:02:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/932111885' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421500948' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 05:02:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:02:58.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:58 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: session ls {prefix=session ls} (starting...)
Nov 25 05:02:58 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: status {prefix=status} (starting...)
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/763932686' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 05:02:58 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2480608717' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 05:02:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:02:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:02:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:02:59 np0005534696 nova_compute[228704]: 2025-11-25 10:02:59.123 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:02:59 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 25 05:02:59 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/288058564' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 05:02:59 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 05:02:59 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2077170816' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 05:02:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:02:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:02:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:02:59.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:02:59 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 25 05:02:59 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/356955100' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 25 05:03:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:02:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945648626' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3469210907' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 05:03:00 np0005534696 nova_compute[228704]: 2025-11-25 10:03:00.112 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1131750807' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 05:03:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:00.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/648257444' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/816679850' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 05:03:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 05:03:01 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/216371832' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 05:03:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:01.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 66985984 unmapped: 1130496 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18(unlocked)] enter Initial
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=0 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000051 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=0 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000023
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000127 1 0.000041
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000182 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8(unlocked)] enter Initial
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=0 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000081 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=0 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000023
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000221 1 0.000091
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 75 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000094 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000338 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 75 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 75 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.837508 2 0.000125
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.838115 2 0.000064
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.837870 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.838315 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.837926 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.838342 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000056 1 0.000076
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000158 1 0.000189
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 76 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1114112 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.11 deep-scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.11 deep-scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=42'42 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.414968 57 0.000127
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.423007 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 24.407386 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 24.407643 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=42'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583978653s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 active pruub 198.947128296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583838463s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.947128296s@ mbc={}] exit Reset 0.000172 1 0.000623
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583838463s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.947128296s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583838463s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.947128296s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583838463s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.947128296s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583838463s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.947128296s@ mbc={}] exit Start 0.000306 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77 pruub=8.583838463s) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.947128296s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.012511 6 0.000029
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.012451 6 0.000024
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9(unlocked)] enter Initial
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=0 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000042 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=0 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000022
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.000026
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000110 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19(unlocked)] enter Initial
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=0 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000032 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=0 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000017
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000071 1 0.000040
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000115 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 lc 35'114 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006100 3 0.000123
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 lc 35'114 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 lc 35'114 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000041 1 0.000028
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 lc 35'114 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.029010 1 0.000036
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 lc 35'157 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.035230 3 0.000107
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 lc 35'157 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 lc 35'157 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000020
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 lc 35'157 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 860160 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 647595 data_alloc: 218103808 data_used: 8192
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052754 1 0.000045
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 77 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 77 ms_handle_reset con 0x561f91725400 session 0x561f93724780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.006608 2 0.000045
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.006825 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.006855 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000137 1 0.000323
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000042 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.919344 1 0.000016
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.007692 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.020180 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000079 1 0.000372
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000043 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.972855 1 0.000018
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.008080 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.020625 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007172 2 0.000050
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.007305 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000084 1 0.000119
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.007329 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=77) [2] r=0 lpr=77 pi=[51,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000060 1 0.000084
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000874 2 0.000146
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000876 2 0.000283
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=44
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=44
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000491 2 0.000089
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000409 2 0.000049
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010653 7 0.000417
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000099 1 0.000092
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] lb MIN local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 DELETING pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.001148 1 0.000046
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] lb MIN local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.001288 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 78 pg[6.9( v 42'42 (0'0,42'42] lb MIN local-lis/les=57/58 n=1 ec=49/17 lis/c=57/57 les/c/f=58/58/0 sis=77) [1] r=-1 lpr=77 pi=[57,77)/1 crt=42'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.012302 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 761856 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.932126045s of 10.009494781s, submitted: 65
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 78 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003521 2 0.000116
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005010 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004377 2 0.000112
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005742 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=76/77 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006705 6 0.000214
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.006932 6 0.000030
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=51/51 les/c/f=52/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002334 4 0.000173
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.8( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001825 4 0.000134
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.18( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 lc 35'475 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003559 3 0.000254
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 lc 35'475 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 lc 35'475 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000083 1 0.000089
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 lc 35'475 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035812 1 0.000042
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 lc 35'172 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.038721 3 0.000136
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 lc 35'172 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 lc 35'172 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000098 1 0.000057
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 lc 35'172 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 524288 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052663 1 0.000064
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 79 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0d8000/0x0/0x4ffc00000, data 0x81b0c/0xf3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x1a2f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.007132 1 0.000049
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.046719 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.053521 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000102
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000034
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.955528 1 0.000026
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.047095 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.054056 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[51,78)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 1 0.000074
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000022 1 0.000026
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=36
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=36
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001411 3 0.000037
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=44
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=44
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000575 3 0.000025
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 80 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 475136 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0d1000/0x0/0x4ffc00000, data 0x83c32/0xf9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x1a2f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005617 2 0.000132
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007147 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006123 2 0.000069
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006765 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=78/79 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=6 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=6 ec=51/29 lis/c=80/51 les/c/f=81/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001251 3 0.000177
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=6 ec=51/29 lis/c=80/51 les/c/f=81/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=6 ec=51/29 lis/c=80/51 les/c/f=81/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.9( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=6 ec=51/29 lis/c=80/51 les/c/f=81/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=78/51 les/c/f=79/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/51 les/c/f=81/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001401 3 0.000115
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/51 les/c/f=81/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/51 les/c/f=81/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 81 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/51 les/c/f=81/52/0 sis=80) [2] r=0 lpr=80 pi=[51,80)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 507904 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 81 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.4 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.4 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 458752 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 680345 data_alloc: 218103808 data_used: 8192
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 434176 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.13 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.13 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 385024 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 82 heartbeat osd_stat(store_statfs(0x4fe0c8000/0x0/0x4ffc00000, data 0x89e27/0x102000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x1a2f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 82 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 376832 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 368640 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 344064 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 696807 data_alloc: 218103808 data_used: 8192
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 303104 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.953958511s of 10.021793365s, submitted: 63
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 262144 heap: 68116480 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 20.420928 48 0.000239
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 20.426254 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 21.433276 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 21.433299 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579153061s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 active pruub 214.059860229s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579126358s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059860229s@ mbc={}] exit Reset 0.000052 1 0.000096
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579126358s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059860229s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579126358s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059860229s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579126358s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059860229s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579126358s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059860229s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.579126358s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059860229s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 20.421825 48 0.000519
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 20.427014 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 21.433283 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 21.433412 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578181267s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 active pruub 214.059829712s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578165054s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059829712s@ mbc={}] exit Reset 0.000031 1 0.000058
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578165054s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059829712s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578165054s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059829712s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578165054s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059829712s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578165054s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059829712s@ mbc={}] exit Start 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 87 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87 pruub=11.578165054s) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 214.059829712s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 87 handle_osd_map epochs [87,87], i have 87, src has [1,87]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fcf1d000/0x0/0x4ffc00000, data 0x92349/0x10e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1286144 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.018158 3 0.000031
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.018362 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.019098 3 0.000028
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.019308 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=87) [0] r=-1 lpr=87 pi=[70,87)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000271 1 0.000507
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000283 1 0.000506
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000179 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000107 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000443
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000067 1 0.000442
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000077 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000043 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000022 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 88 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1277952 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 88 handle_osd_map epochs [88,89], i have 89, src has [1,89]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000526 4 0.000270
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001205 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000743 4 0.000286
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001781 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.001899 5 0.001192
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000200 1 0.000027
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000410 1 0.000016
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.002809 5 0.000864
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035392 2 0.000054
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.035229 1 0.000015
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000347 1 0.000022
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059594 2 0.000037
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 89 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 98304 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 719028 data_alloc: 218103808 data_used: 24576
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.914274 1 0.000146
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.012677 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.014567 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.015017 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.974860 1 0.000054
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013672 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.014939 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989828110s) [0] async=[0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 42'1151 active pruub 220.505386353s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989717484s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.505386353s@ mbc={}] exit Reset 0.000158 1 0.000726
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.015203 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989717484s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.505386353s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[70,88)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988760948s) [0] async=[0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 42'1151 active pruub 220.504470825s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988720894s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.504470825s@ mbc={}] exit Reset 0.000066 1 0.000280
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988720894s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.504470825s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988720894s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.504470825s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988720894s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.504470825s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988720894s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.504470825s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.988720894s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.504470825s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989717484s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.505386353s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989717484s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.505386353s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989717484s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.505386353s@ mbc={}] exit Start 0.000482 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 90 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90 pruub=14.989717484s) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 220.505386353s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 90 heartbeat osd_stat(store_statfs(0x4fcf12000/0x0/0x4ffc00000, data 0x98897/0x117000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 81920 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006931 7 0.000058
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006529 7 0.000569
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000045 1 0.000047
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000076 1 0.000134
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 DELETING pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.046048 2 0.000199
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046170 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.1d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=88/89 n=5 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.053141 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 DELETING pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.104877 2 0.000250
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.105163 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 91 pg[9.d( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=88/89 n=6 ec=51/29 lis/c=88/70 les/c/f=89/71/0 sis=90) [0] r=-1 lpr=90 pi=[70,90)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.112294 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.8 deep-scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.8 deep-scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 73728 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1024000 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.9 deep-scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.9 deep-scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1024000 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69189632 unmapped: 1024000 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 706369 data_alloc: 218103808 data_used: 12288
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fcf12000/0x0/0x4ffc00000, data 0x9c6bb/0x11a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 958464 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.040899277s of 10.097165108s, submitted: 56
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 958464 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 38.533961 84 0.000290
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 38.541829 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 39.554543 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 39.554597 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466200829s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 active pruub 222.015991211s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466086388s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.015991211s@ mbc={}] exit Reset 0.000209 1 0.000236
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466086388s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.015991211s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466086388s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.015991211s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466086388s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.015991211s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 38.534103 84 0.000712
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 38.541772 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 39.554433 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466086388s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.015991211s@ mbc={}] exit Start 0.000090 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466086388s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.015991211s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 39.554452 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466098785s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 active pruub 222.016296387s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466066360s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.016296387s@ mbc={}] exit Reset 0.000063 1 0.000142
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466066360s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.016296387s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466066360s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.016296387s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466066360s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.016296387s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466066360s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.016296387s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 92 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92 pruub=9.466066360s) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 222.016296387s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.3 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.3 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 942080 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004095 3 0.000032
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.004208 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.004290 3 0.000233
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.004612 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=92) [0] r=-1 lpr=92 pi=[64,92)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000149 1 0.000335
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000308 1 0.000454
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000029 1 0.000037
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000032 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000026 1 0.000095
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 93 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.c scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.c scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69296128 unmapped: 917504 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 93 handle_osd_map epochs [93,94], i have 94, src has [1,94]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002853 4 0.000078
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002939 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003118 4 0.000168
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003212 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002160 5 0.000733
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000103 1 0.000039
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.002609 5 0.000711
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000760 1 0.000063
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.035466 1 0.000030
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035477 2 0.000077
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000415 1 0.000030
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052404 2 0.000049
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 94 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 860160 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 722174 data_alloc: 218103808 data_used: 24576
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.921801 1 0.000038
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013240 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.016468 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.016492 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989589691s) [0] async=[0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 42'1151 active pruub 230.560867310s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989502907s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.560867310s@ mbc={}] exit Reset 0.000121 1 0.000168
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989502907s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.560867310s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989502907s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.560867310s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989502907s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.560867310s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989502907s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.560867310s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.989502907s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.560867310s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.975418 1 0.000127
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014207 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.017161 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.017228 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=93) [0]/[2] async=[0] r=0 lpr=93 pi=[64,93)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987565994s) [0] async=[0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 42'1151 active pruub 230.559646606s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987524986s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.559646606s@ mbc={}] exit Reset 0.000074 1 0.000109
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987524986s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.559646606s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987524986s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.559646606s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987524986s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.559646606s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987524986s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.559646606s@ mbc={}] exit Start 0.000006 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 95 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95 pruub=14.987524986s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 230.559646606s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 843776 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006366 7 0.000063
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000086 1 0.000068
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007888 7 0.000072
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000081 1 0.000084
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 DELETING pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.045789 2 0.000135
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.045929 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.1f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=93/94 n=5 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.052340 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcf04000/0x0/0x4ffc00000, data 0xa4a7c/0x126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 DELETING pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.096864 2 0.000097
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.096995 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 96 pg[9.f( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=93/94 n=6 ec=51/29 lis/c=93/64 les/c/f=94/65/0 sis=95) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.104924 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 811008 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 811008 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69402624 unmapped: 811008 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 96 handle_osd_map epochs [97,98], i have 96, src has [1,98]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 745472 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718537 data_alloc: 218103808 data_used: 16384
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 737280 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fcaee000/0x0/0x4ffc00000, data 0xaa8bc/0x12d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fcaee000/0x0/0x4ffc00000, data 0xaa8bc/0x12d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1e scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.996163368s of 10.044158936s, submitted: 48
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1e scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69435392 unmapped: 778240 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.13 deep-scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.13 deep-scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 753664 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fcaef000/0x0/0x4ffc00000, data 0xaa8bc/0x12d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 688128 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 99 heartbeat osd_stat(store_statfs(0x4fcaeb000/0x0/0x4ffc00000, data 0xac9a8/0x130000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 679936 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 730388 data_alloc: 218103808 data_used: 16384
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1d scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.1d scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69558272 unmapped: 655360 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 100 heartbeat osd_stat(store_statfs(0x4fcae7000/0x0/0x4ffc00000, data 0xaeace/0x133000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 100 handle_osd_map epochs [101,103], i have 100, src has [1,103]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 100 handle_osd_map epochs [101,103], i have 103, src has [1,103]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 425984 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 425984 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 409600 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.2 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.2 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 352256 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 745081 data_alloc: 218103808 data_used: 16384
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 319488 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcadd000/0x0/0x4ffc00000, data 0xb69f4/0x13f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 401408 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.054795265s of 10.092762947s, submitted: 63
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcadd000/0x0/0x4ffc00000, data 0xb69f4/0x13f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 393216 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 105 heartbeat osd_stat(store_statfs(0x4fcad9000/0x0/0x4ffc00000, data 0xb8ae0/0x142000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69820416 unmapped: 393216 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 385024 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 755196 data_alloc: 218103808 data_used: 32768
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 1400832 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.17 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.17 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 54.666209 106 0.001250
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 54.667547 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 55.675055 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 55.675094 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=70) [2] r=0 lpr=70 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.334072113s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 active pruub 246.056503296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.333869934s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 246.056503296s@ mbc={}] exit Reset 0.000443 1 0.000546
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.333869934s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 246.056503296s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.333869934s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 246.056503296s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.333869934s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 246.056503296s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.333869934s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 246.056503296s@ mbc={}] exit Start 0.000057 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 107 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107 pruub=9.333869934s) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 246.056503296s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.601044 3 0.000175
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.601158 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=107) [0] r=-1 lpr=107 pi=[70,107)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000051 1 0.000076
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000037
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 108 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcad6000/0x0/0x4ffc00000, data 0xbabcc/0x145000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 1392640 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003230 4 0.000061
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003336 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=70/71 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 1384448 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16(unlocked)] enter Initial
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=0 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000039 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=0 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000020
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000194 1 0.000040
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000235 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=70/70 les/c/f=71/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.453176 5 0.000318
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000055 1 0.000039
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000331 1 0.000044
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.032307 2 0.000031
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 109 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 1507328 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.925626 1 0.000050
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.411681 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.415046 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.415075 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[70,108)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.959431 2 0.000049
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.959684 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.959703 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=109) [2] r=0 lpr=109 pi=[72,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000041 1 0.000063
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041230202s) [0] async=[0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 42'1151 active pruub 254.780319214s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041177750s) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 254.780319214s@ mbc={}] exit Reset 0.000180 1 0.000261
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041177750s) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 254.780319214s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041177750s) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 254.780319214s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041177750s) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 254.780319214s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041177750s) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 254.780319214s@ mbc={}] exit Start 0.000058 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110 pruub=15.041177750s) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 254.780319214s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000234 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 110 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 1490944 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777226 data_alloc: 218103808 data_used: 40960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.775000 6 0.000153
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.775019 5 0.000278
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=72/72 les/c/f=73/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001243 2 0.000139
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 lc 35'244 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002357 4 0.000141
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 lc 35'244 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 lc 35'244 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000126 1 0.000134
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 lc 35'244 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=-1 lpr=110 DELETING pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.034359 2 0.000139
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.035659 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.15( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=108/109 n=5 ec=51/29 lis/c=108/70 les/c/f=109/71/0 sis=110) [0] r=-1 lpr=110 pi=[70,110)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.810781 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 1474560 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.055375 1 0.000064
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 111 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.199427 1 0.000030
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.257423 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.032721 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=110) [2]/[0] r=-1 lpr=110 pi=[72,110)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000072 1 0.000111
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001878 2 0.000031
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0olog.dups.size()=26
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=26
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000849 2 0.000054
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 112 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 1458176 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.019710541s of 10.074792862s, submitted: 63
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 112 handle_osd_map epochs [112,113], i have 113, src has [1,113]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007133 2 0.000078
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009928 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=110/111 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=112/113 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=112/113 n=5 ec=51/29 lis/c=110/72 les/c/f=111/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=112/113 n=5 ec=51/29 lis/c=112/72 les/c/f=113/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001188 4 0.000172
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=112/113 n=5 ec=51/29 lis/c=112/72 les/c/f=113/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=112/113 n=5 ec=51/29 lis/c=112/72 les/c/f=113/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 113 pg[9.16( v 42'1151 (0'0,42'1151] local-lis/les=112/113 n=5 ec=51/29 lis/c=112/72 les/c/f=113/73/0 sis=112) [2] r=0 lpr=112 pi=[72,112)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcac4000/0x0/0x4ffc00000, data 0xc6e46/0x157000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 1433600 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 1433600 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 1417216 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789408 data_alloc: 218103808 data_used: 36864
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 114 ms_handle_reset con 0x561f91724c00 session 0x561f93724f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 1376256 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 1368064 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=80) [2] r=0 lpr=80 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 53.372447 101 0.000252
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=80) [2] r=0 lpr=80 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 53.373926 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=80) [2] r=0 lpr=80 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 54.380716 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=80) [2] r=0 lpr=80 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 54.380754 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=80) [2] r=0 lpr=80 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627995491s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 active pruub 258.434844971s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627876282s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 258.434844971s@ mbc={}] exit Reset 0.000162 1 0.000252
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627876282s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 258.434844971s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627876282s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 258.434844971s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627876282s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 258.434844971s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627876282s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 258.434844971s@ mbc={}] exit Start 0.000088 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 115 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115 pruub=10.627876282s) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 258.434844971s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 115 heartbeat osd_stat(store_statfs(0x4fcabe000/0x0/0x4ffc00000, data 0xcaed4/0x15d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 1359872 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006800 3 0.000168
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.006942 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=115) [1] r=-1 lpr=115 pi=[80,115)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000058 1 0.000085
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000028 1 0.000034
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 116 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 1343488 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007987 4 0.000049
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.008084 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=80/81 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=80/80 les/c/f=81/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.003174 5 0.000299
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000083
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.008330 1 0.000041
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049557 2 0.000054
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 117 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 1327104 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 809372 data_alloc: 218103808 data_used: 40960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 117 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.947048 1 0.000061
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.008864 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.016976 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.017007 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[80,116)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994240761s) [1] async=[1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 42'1151 active pruub 265.825317383s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994033813s) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 265.825317383s@ mbc={}] exit Reset 0.000249 1 0.000740
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994033813s) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 265.825317383s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994033813s) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 265.825317383s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994033813s) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 265.825317383s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994033813s) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 265.825317383s@ mbc={}] exit Start 0.000150 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 118 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118 pruub=14.994033813s) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 265.825317383s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 1318912 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.005132 7 0.000537
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000115 1 0.000085
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=-1 lpr=118 DELETING pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053244 2 0.000144
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053419 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 119 pg[9.19( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=116/117 n=5 ec=51/29 lis/c=116/80 les/c/f=117/81/0 sis=118) [1] r=-1 lpr=118 pi=[80,118)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.058839 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 1302528 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.963201523s of 10.010736465s, submitted: 52
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 119 handle_osd_map epochs [120,121], i have 119, src has [1,121]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1245184 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fcaa8000/0x0/0x4ffc00000, data 0xd8f40/0x171000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 1228800 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fcaab000/0x0/0x4ffc00000, data 0xd8f40/0x171000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 1212416 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812512 data_alloc: 218103808 data_used: 45056
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 81.849098 172 0.001346
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active 81.856761 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary 82.869457 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] exit Started 82.869841 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=64) [2] r=0 lpr=64 crt=42'1151 mlcod 0'0 active mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151810646s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 active pruub 270.017272949s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151712418s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 270.017272949s@ mbc={}] exit Reset 0.000138 1 0.000649
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151712418s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 270.017272949s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151712418s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 270.017272949s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151712418s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 270.017272949s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151712418s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 270.017272949s@ mbc={}] exit Start 0.000058 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 122 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122 pruub=14.151712418s) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 270.017272949s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 1196032 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014036 3 0.000153
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.014342 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=122) [1] r=-1 lpr=122 pi=[64,122)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Reset 0.000677 1 0.000976
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] exit Start 0.000056 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000056 1 0.000169
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000061 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 123 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1187840 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004738 4 0.000167
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004978 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=64/65 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1171456 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=64/64 les/c/f=65/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.829369 5 0.000888
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000103 1 0.000081
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000414 1 0.000081
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045300 2 0.000043
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 124 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.129974 1 0.000125
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005390 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started/Primary 2.010405 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] exit Started 2.010502 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=123) [1]/[2] async=[1] r=0 lpr=123 pi=[64,123)/1 crt=42'1151 mlcod 42'1151 active+remapped mbc={255={}}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823269844s) [1] async=[1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 42'1151 active pruub 274.714538574s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823235512s) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 274.714538574s@ mbc={}] exit Reset 0.000074 1 0.000137
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823235512s) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 274.714538574s@ mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823235512s) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 274.714538574s@ mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823235512s) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 274.714538574s@ mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823235512s) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 274.714538574s@ mbc={}] exit Start 0.000007 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 125 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125 pruub=15.823235512s) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY pruub 274.714538574s@ mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fcaa0000/0x0/0x4ffc00000, data 0xdf10d/0x17a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 1105920 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.123290 6 0.000084
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001238 2 0.000051
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d(unlocked)] enter Initial
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=0 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000044 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=0 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000033 1 0.000048
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000062 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000128 1 0.000172
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000038 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000215 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=-1 lpr=125 DELETING pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.017512 2 0.000176
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.018802 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 126 pg[9.1b( v 42'1151 (0'0,42'1151] lb MIN local-lis/les=123/124 n=5 ec=51/29 lis/c=123/64 les/c/f=124/65/0 sis=125) [1] r=-1 lpr=125 pi=[64,125)/1 crt=42'1151 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.142138 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1024000 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832022 data_alloc: 218103808 data_used: 45056
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 127 handle_osd_map epochs [126,127], i have 127, src has [1,127]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.892298 2 0.000108
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.892608 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.892750 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=126) [2] r=0 lpr=126 pi=[90,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000119 1 0.000265
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000053 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 127 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1007616 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.594965 5 0.000133
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 lc 0'0 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=90/90 les/c/f=91/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 crt=42'1151 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 lc 35'540 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002701 4 0.000104
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 lc 35'540 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 lc 35'540 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000040 1 0.000030
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 lc 35'540 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.037567 1 0.000048
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 128 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 1015808 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.196848869s of 10.268465996s, submitted: 66
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.382042 1 0.000022
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.422427 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] exit Started 2.017494 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=127) [2]/[0] r=-1 lpr=127 pi=[90,127)/1 luod=0'0 crt=42'1151 mlcod 0'0 active+remapped mbc={}] enter Reset
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 luod=0'0 crt=42'1151 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Reset 0.000051 1 0.000077
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Start
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000023 1 0.000030
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=0/0 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: merge_log_dups log.dups.size()=0 olog.dups.size()=36
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=36
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001149 3 0.000033
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 129 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 942080 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006845 2 0.000111
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008086 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=127/128 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=129/130 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=129/130 n=5 ec=51/29 lis/c=127/90 les/c/f=128/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=129/130 n=5 ec=51/29 lis/c=129/90 les/c/f=130/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001510 4 0.000174
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=129/130 n=5 ec=51/29 lis/c=129/90 les/c/f=130/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=129/130 n=5 ec=51/29 lis/c=129/90 les/c/f=130/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 pg_epoch: 130 pg[9.1d( v 42'1151 (0'0,42'1151] local-lis/les=129/130 n=5 ec=51/29 lis/c=129/90 les/c/f=130/91/0 sis=129) [2] r=0 lpr=129 pi=[90,129)/1 crt=42'1151 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fca91000/0x0/0x4ffc00000, data 0xe916c/0x18a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 933888 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fca8d000/0x0/0x4ffc00000, data 0xeb10e/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 917504 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 854588 data_alloc: 218103808 data_used: 45056
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 892928 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fca8f000/0x0/0x4ffc00000, data 0xeb10e/0x18d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 884736 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 876544 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 794624 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 786432 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 861556 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 133 handle_osd_map epochs [134,135], i have 133, src has [1,135]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 778240 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 761856 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fca7e000/0x0/0x4ffc00000, data 0xf52a4/0x19c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.072407722s of 10.112970352s, submitted: 31
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 753664 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 753664 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 745472 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 870202 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 737280 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 729088 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 729088 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 720896 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724800 session 0x561f91631c20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 720896 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 870202 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 720896 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 712704 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 712704 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 704512 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 688128 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 870202 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 679936 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 671744 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f93ee1c00 session 0x561f91fb72c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 655360 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 655360 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 647168 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 870202 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.801046371s of 17.804576874s, submitted: 2
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 630784 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 630784 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 630784 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 606208 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 589824 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 869510 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 573440 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 573440 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 573440 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 532480 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 532480 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871154 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 532480 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.724635124s of 11.736192703s, submitted: 9
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 466944 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 466944 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 458752 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 385024 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 872666 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 352256 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 352256 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 294912 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 294912 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 270336 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 872518 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 245760 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 237568 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 237568 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 237568 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 221184 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871927 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 212992 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.453297615s of 14.466694832s, submitted: 12
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 180224 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 180224 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 172032 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 180224 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 172032 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 163840 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 163840 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 163840 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 122880 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 106496 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 98304 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 98304 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 98304 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 98304 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 90112 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 90112 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 90112 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 81920 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 57344 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 32768 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 32768 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 32768 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 24576 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 24576 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72310784 unmapped: 0 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725c00 session 0x561f934783c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 983040 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 983040 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.601573944s of 38.603164673s, submitted: 1
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871927 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 876544 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 860160 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 873455 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874967 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.004404068s of 12.014169693s, submitted: 11
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874360 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 704512 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 655360 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 655360 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91647400 session 0x561f9208c780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 630784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 630784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.241825104s of 32.244922638s, submitted: 2
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 606208 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874360 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 581632 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 548864 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 548864 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877400 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 499712 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 499712 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 483328 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876641 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.004043579s of 12.013555527s, submitted: 12
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 475136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 475136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 475136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 450560 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 450560 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 450560 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 442368 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 417792 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 417792 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 368640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724800 session 0x561f916d1680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 352256 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 335872 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 335872 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 327680 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 327680 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.972728729s of 36.975532532s, submitted: 2
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876202 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877730 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 278528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 262144 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 262144 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 253952 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.007935524s of 12.018008232s, submitted: 11
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876532 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 196608 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725400 session 0x561f9347a960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.416950226s of 33.419124603s, submitted: 2
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876532 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 163840 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 163840 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 139264 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 878060 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 139264 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 131072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877301 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.548579216s of 14.557904243s, submitted: 12
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 81920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f90b34000 session 0x561f931d5680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 16384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 16384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1040384 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1040384 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1040384 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.721147537s of 24.722063065s, submitted: 1
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877453 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 2064384 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 2056192 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 2056192 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 2048000 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 2039808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880493 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 2039808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 2023424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724c00 session 0x561f94066f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 2023424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.950899124s of 10.961990356s, submitted: 12
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 2023424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 2007040 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879754 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 2007040 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 1998848 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 1998848 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879754 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 1982464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 1982464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1966080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1966080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879886 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1966080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.868379593s of 12.871901512s, submitted: 3
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 1982464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881414 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 909312 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 901120 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 892928 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 892928 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880655 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 884736 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 884736 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 876544 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.448137283s of 12.458023071s, submitted: 11
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 860160 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 860160 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 843776 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 835584 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 827392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 827392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 827392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 819200 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 802816 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 794624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 794624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 786432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 786432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 778240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 778240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 778240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 770048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 770048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 745472 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 745472 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 737280 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5911 writes, 25K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5911 writes, 1013 syncs, 5.84 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5911 writes, 25K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 19.14 MB, 0.03 MB/s
Interval WAL: 5911 writes, 1013 syncs, 5.84 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 679936 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 663552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 663552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 663552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 655360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 655360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 647168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 647168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 638976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 638976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 638976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 630784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 630784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 622592 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 614400 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 614400 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 598016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 598016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 581632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 52.580356598s of 52.581237793s, submitted: 1
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 581632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 1867776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 1859584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 1835008 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 1818624 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 1818624 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 1802240 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 1802240 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 1794048 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 1785856 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 1785856 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 1777664 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 1769472 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1761280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1761280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1761280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1753088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1753088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1753088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 1744896 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1728512 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1728512 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1728512 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 1720320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 1720320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 1720320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 1695744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 1695744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 1687552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 1687552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1679360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1679360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1679360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1671168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1671168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1662976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1662976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 1654784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 1654784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 1654784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1646592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1646592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1646592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 1638400 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1630208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1622016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1622016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1622016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1613824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1613824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1605632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1605632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1597440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1597440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1597440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1572864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1572864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1572864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1564672 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1564672 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1548288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1548288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1531904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1531904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1507328 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1507328 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1507328 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91647400 session 0x561f9208c960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1482752 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1482752 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 1466368 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 1466368 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 98.924819946s of 99.062004089s, submitted: 246
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880216 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881744 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881744 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.009870529s of 12.023555756s, submitted: 10
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881137 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725c00 session 0x561f94067680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.978752136s of 77.981117249s, submitted: 2
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881137 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884177 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883418 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725400 session 0x561f93724f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.983892441s of 14.993802071s, submitted: 12
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883438 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883438 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 1302528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885098 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 1302528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 1302528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.811387062s of 12.817705154s, submitted: 7
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1286144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1286144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1286144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886610 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1269760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1269760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1269760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1269760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1261568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885412 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f93ee1c00 session 0x561f92574780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1261568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1261568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1261568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885280 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885280 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.171207428s of 18.178813934s, submitted: 9
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885428 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1236992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1236992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1236992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 1228800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884078 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 1228800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 1228800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 1228800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.994335175s of 14.004615784s, submitted: 11
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884098 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91647400 session 0x561f9485ad20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884098 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884098 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.246137619s of 14.247477531s, submitted: 1
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884230 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 1204224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 1204224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 1204224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 1196032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885758 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 1196032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1187840 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1179648 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885758 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1155072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1155072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885758 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.728876114s of 16.739055634s, submitted: 10
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724800 session 0x561f937670e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1114112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1114112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.609428406s of 35.610633850s, submitted: 1
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 887270 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 887270 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.006770134s of 12.016820908s, submitted: 10
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886663 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 05:03:01 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2721465140' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724c00 session 0x561f94839c20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f90b34000 session 0x561f94066780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.620494843s of 57.623191833s, submitted: 2
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 917504 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 917504 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 888323 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725c00 session 0x561f94878d20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889835 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.819118500s of 10.832632065s, submitted: 12
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889228 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889112 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.820836067s of 11.834323883s, submitted: 10
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892136 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890938 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91647400 session 0x561f934812c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.885120392s of 36.895862579s, submitted: 8
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890938 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890954 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890195 data_alloc: 218103808 data_used: 49152
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.011589050s of 12.020524979s, submitted: 11
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889756 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.008609772s of 35.010581970s, submitted: 2
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 1826816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 137 ms_handle_reset con 0x561f91724c00 session 0x561f9485be00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 1753088 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 139 ms_handle_reset con 0x561f93ee1c00 session 0x561f943dd2c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 11018240 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 140 ms_handle_reset con 0x561f93ee0c00 session 0x561f948a14a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 10928128 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939800 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 10928128 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x56f6c8/0x61e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5f9000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943642 data_alloc: 218103808 data_used: 57344
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f90b34000 session 0x561f948781e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.949914932s of 10.983639717s, submitted: 69
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 10911744 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5f9000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 10911744 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 10911744 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942170 data_alloc: 218103808 data_used: 57344
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942186 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.442098618s of 10.445603371s, submitted: 4
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942318 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f91724800 session 0x561f948a52c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942018 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942170 data_alloc: 218103808 data_used: 57344
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.351478577s of 14.358352661s, submitted: 8
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f93ee1c00 session 0x561f943dde00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f93463800 session 0x561f943dd2c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f91727800 session 0x561f943dc780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942186 data_alloc: 218103808 data_used: 53248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f90b34000 session 0x561f943dd4a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f91724800 session 0x561f916d12c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 6225920 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f91727800 session 0x561f944245a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 6225920 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 6225920 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f93463800 session 0x561f94424780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f93ee1c00 session 0x561f948a1860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f90b34000 session 0x561f94425e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f91724800 session 0x561f943dcb40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f91727800 session 0x561f94424f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 17776640 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f93463800 session 0x561f93472d20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 17776640 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054382 data_alloc: 218103808 data_used: 4714496
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbb15000/0x0/0x4ffc00000, data 0x1052938/0x1106000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f93ee1c00 session 0x561f91fb7860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 17768448 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f90b34000 session 0x561f94838780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f91724800 session 0x561f948a14a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 17702912 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 8183808 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 05:03:01 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483676717' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 7258112 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.981223106s of 12.044069290s, submitted: 85
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 7258112 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132380 data_alloc: 234881024 data_used: 15904768
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbaf1000/0x0/0x4ffc00000, data 0x1076948/0x112b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135846 data_alloc: 234881024 data_used: 15904768
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbaed000/0x0/0x4ffc00000, data 0x107891a/0x112e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 7258112 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 7249920 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104882176 unmapped: 3047424 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbaee000/0x0/0x4ffc00000, data 0x107891a/0x112e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9956000/0x0/0x4ffc00000, data 0x207091a/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103817216 unmapped: 4112384 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9956000/0x0/0x4ffc00000, data 0x207091a/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 4079616 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262578 data_alloc: 234881024 data_used: 16596992
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 4079616 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 4079616 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103858176 unmapped: 4071424 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9956000/0x0/0x4ffc00000, data 0x207091a/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 4030464 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.577157021s of 14.672077179s, submitted: 163
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 3784704 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261510 data_alloc: 234881024 data_used: 16601088
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 3784704 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 3784704 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x209091a/0x2146000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 3751936 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 3751936 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x209091a/0x2146000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104210432 unmapped: 3719168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261510 data_alloc: 234881024 data_used: 16601088
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x209091a/0x2146000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261738 data_alloc: 234881024 data_used: 16601088
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9926000/0x0/0x4ffc00000, data 0x20a091a/0x2156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.475128174s of 12.481987953s, submitted: 7
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91975c00 session 0x561f944243c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f94401400 session 0x561f94067680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 3489792 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f94401c00 session 0x561f920c2000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f94838000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f91630f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f91630b40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f91631860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282090 data_alloc: 234881024 data_used: 16601088
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924ccc00 session 0x561f91f5d0e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x223997c/0x22f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f93479a40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f93479e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f934783c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x223999f/0x22f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 4308992 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 4308992 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286323 data_alloc: 234881024 data_used: 17137664
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x223999f/0x22f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 4308992 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 7310 writes, 28K keys, 7310 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 7310 writes, 1661 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1399 writes, 3452 keys, 1399 commit groups, 1.0 writes per commit group, ingest: 2.70 MB, 0.00 MB/s
Interval WAL: 1399 writes, 648 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x223999f/0x22f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.559023857s of 12.601721764s, submitted: 51
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286947 data_alloc: 234881024 data_used: 17141760
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9788000/0x0/0x4ffc00000, data 0x223c99f/0x22f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106471424 unmapped: 3563520 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 3145728 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331257 data_alloc: 234881024 data_used: 17174528
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92c9000/0x0/0x4ffc00000, data 0x26fb99f/0x27b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 3145728 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 3145728 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92c6000/0x0/0x4ffc00000, data 0x26fe99f/0x27b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330449 data_alloc: 234881024 data_used: 17178624
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92c6000/0x0/0x4ffc00000, data 0x26fe99f/0x27b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 3055616 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.937485695s of 14.987780571s, submitted: 64
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f943e23c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 3055616 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc800 session 0x561f920c2000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273125 data_alloc: 234881024 data_used: 16601088
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9508000/0x0/0x4ffc00000, data 0x20ac91a/0x2162000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f948821e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93463800 session 0x561f94882960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f91630d20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 98107392 unmapped: 11927552 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992438 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 12247040 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992438 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992438 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f948385a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992438 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.600938797s of 25.782604218s, submitted: 338
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f925725a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f9346c1e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992146 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f93482960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f93724960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 27222016 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d90000/0x0/0x4ffc00000, data 0x18288fa/0x18dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d90000/0x0/0x4ffc00000, data 0x18288fa/0x18dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 27222016 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 27213824 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 27205632 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 27205632 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132945 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 27672576 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93463800 session 0x561f948a1e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 27508736 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 14680064 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 14614528 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 14614528 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268206 data_alloc: 234881024 data_used: 24109056
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 14614528 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.134346008s of 12.186895370s, submitted: 60
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267551 data_alloc: 234881024 data_used: 24117248
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119627776 unmapped: 5242880 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119857152 unmapped: 5013504 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f956e000/0x0/0x4ffc00000, data 0x204991d/0x20fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355601 data_alloc: 234881024 data_used: 25296896
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f956e000/0x0/0x4ffc00000, data 0x204991d/0x20fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.498706818s of 11.555447578s, submitted: 107
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349473 data_alloc: 234881024 data_used: 25309184
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91725c00 session 0x561f93478d20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f954d000/0x0/0x4ffc00000, data 0x206a91d/0x211f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349721 data_alloc: 234881024 data_used: 25309184
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9547000/0x0/0x4ffc00000, data 0x207091d/0x2125000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.617301941s of 11.622908592s, submitted: 4
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349809 data_alloc: 234881024 data_used: 25309184
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 6160384 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 6160384 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 6160384 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9544000/0x0/0x4ffc00000, data 0x207391d/0x2128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f948383c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f948a12c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f943e34a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104890368 unmapped: 19980288 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010535 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104890368 unmapped: 19980288 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 20709376 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 20709376 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb01c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 20709376 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104194048 unmapped: 20676608 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011479 data_alloc: 218103808 data_used: 4718592
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104194048 unmapped: 20676608 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104194048 unmapped: 20676608 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.913671494s of 12.943838120s, submitted: 48
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010720 data_alloc: 218103808 data_used: 4718592
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010740 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f931d7680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93463800 session 0x561f936c81e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f931d5e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f948a0780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.803101540s of 10.806558609s, submitted: 3
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f94066f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f943e2780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee1c00 session 0x561f943e21e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f937661e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f93767860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137748 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9f50000/0x0/0x4ffc00000, data 0x16688a8/0x171c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9f50000/0x0/0x4ffc00000, data 0x16688a8/0x171c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f93767c20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f940665a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee1000 session 0x561f940661e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f940663c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 33996800 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138722 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 33980416 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9f4f000/0x0/0x4ffc00000, data 0x16688b8/0x171d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257282 data_alloc: 234881024 data_used: 22204416
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9f4f000/0x0/0x4ffc00000, data 0x16688b8/0x171d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 26476544 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 26476544 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.129447937s of 16.155773163s, submitted: 23
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 14434304 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383618 data_alloc: 234881024 data_used: 22441984
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 15581184 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 15581184 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 15581184 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d7f000/0x0/0x4ffc00000, data 0x28388b8/0x28ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 15548416 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 15548416 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393134 data_alloc: 234881024 data_used: 22614016
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 16302080 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d5b000/0x0/0x4ffc00000, data 0x285c8b8/0x2911000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 16302080 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 16302080 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121749504 unmapped: 16293888 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f948a43c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121749504 unmapped: 16293888 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390878 data_alloc: 234881024 data_used: 22614016
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d5b000/0x0/0x4ffc00000, data 0x285c8b8/0x2911000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121749504 unmapped: 16293888 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d5b000/0x0/0x4ffc00000, data 0x285c8b8/0x2911000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.868580818s of 11.958714485s, submitted: 169
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 16171008 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 16171008 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 16171008 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d50000/0x0/0x4ffc00000, data 0x28678b8/0x291c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 16138240 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391046 data_alloc: 234881024 data_used: 22614016
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 16138240 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 16130048 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f91f5d2c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d50000/0x0/0x4ffc00000, data 0x28678b8/0x291c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 21225472 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8269000/0x0/0x4ffc00000, data 0x334e8b8/0x3403000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 21217280 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8269000/0x0/0x4ffc00000, data 0x334e8b8/0x3403000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 21348352 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470344 data_alloc: 234881024 data_used: 22614016
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 21348352 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 21348352 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.052414894s of 11.076602936s, submitted: 19
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 21348352 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f934d7000 session 0x561f91625a40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e000 session 0x561f93478d20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 21340160 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f943dd860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f916d14a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 21020672 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476050 data_alloc: 234881024 data_used: 22609920
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8265000/0x0/0x4ffc00000, data 0x33518c8/0x3407000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 21659648 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 11460608 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 11460608 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 11436032 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 11436032 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1552051 data_alloc: 251658240 data_used: 34033664
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 11436032 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8241000/0x0/0x4ffc00000, data 0x33758c8/0x342b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 11411456 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 11411456 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f934d6400 session 0x561f91f5dc20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.968343735s of 10.983410835s, submitted: 19
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 11411456 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f823e000/0x0/0x4ffc00000, data 0x33768c8/0x342c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [1])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 11378688 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554567 data_alloc: 251658240 data_used: 34070528
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132513792 unmapped: 9207808 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x376d8c8/0x3823000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x376d8c8/0x3823000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e45000/0x0/0x4ffc00000, data 0x37718c8/0x3827000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1592305 data_alloc: 251658240 data_used: 34983936
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e45000/0x0/0x4ffc00000, data 0x37718c8/0x3827000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e45000/0x0/0x4ffc00000, data 0x37718c8/0x3827000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.034820557s of 10.063674927s, submitted: 38
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f92574780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f934d7000 session 0x561f925752c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f94882b40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 16998400 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400896 data_alloc: 234881024 data_used: 22614016
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d4c000/0x0/0x4ffc00000, data 0x286b8b8/0x2920000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 16998400 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 16998400 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f94067e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f943dc5a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f943dd860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 30433280 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 30449664 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 31473664 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037757 data_alloc: 218103808 data_used: 4718592
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 31473664 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 31473664 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037150 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.915751457s of 15.942548752s, submitted: 41
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037018 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037018 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037018 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.887767792s of 11.889199257s, submitted: 1
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f934d6400 session 0x561f916d0960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 40525824 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 40525824 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 40525824 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa6a0000/0x0/0x4ffc00000, data 0xf19898/0xfcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f948a14a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 40493056 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109235 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 40501248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176551 data_alloc: 234881024 data_used: 14622720
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa6a0000/0x0/0x4ffc00000, data 0xf19898/0xfcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176551 data_alloc: 234881024 data_used: 14622720
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.633050919s of 13.654164314s, submitted: 19
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa6a0000/0x0/0x4ffc00000, data 0xf19898/0xfcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x182f898/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256699 data_alloc: 234881024 data_used: 15204352
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cfa000/0x0/0x4ffc00000, data 0x18b7898/0x196a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252531 data_alloc: 234881024 data_used: 15204352
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9ce3000/0x0/0x4ffc00000, data 0x18d6898/0x1989000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.823400497s of 12.888879776s, submitted: 92
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253139 data_alloc: 234881024 data_used: 15212544
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cd6000/0x0/0x4ffc00000, data 0x18e3898/0x1996000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 31080448 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 31080448 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253771 data_alloc: 234881024 data_used: 15212544
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f000 session 0x561f931d7680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f91f5d860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f9485bc20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530fc00 session 0x561f91624b40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f943ddc20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f000 session 0x561f936c90e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f935a6000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f93724000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f93725c20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f963b000/0x0/0x4ffc00000, data 0x1f7d8fa/0x2031000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 30441472 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 30441472 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f963b000/0x0/0x4ffc00000, data 0x1f7d8fa/0x2031000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 30367744 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f963b000/0x0/0x4ffc00000, data 0x1f7d8fa/0x2031000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f91fb7a40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 30334976 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 30334976 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315215 data_alloc: 234881024 data_used: 15212544
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.798554420s of 12.838501930s, submitted: 53
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f92573680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 30588928 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 31006720 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 28614656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9616000/0x0/0x4ffc00000, data 0x1fa191d/0x2056000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 28614656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9616000/0x0/0x4ffc00000, data 0x1fa191d/0x2056000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 28614656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357201 data_alloc: 234881024 data_used: 20664320
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 28614656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9616000/0x0/0x4ffc00000, data 0x1fa191d/0x2056000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 28557312 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 28557312 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9616000/0x0/0x4ffc00000, data 0x1fa191d/0x2056000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 28508160 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 28508160 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357653 data_alloc: 234881024 data_used: 20668416
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.206544876s of 10.224551201s, submitted: 20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 24453120 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127172608 unmapped: 24051712 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 24043520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 24043520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f895a000/0x0/0x4ffc00000, data 0x2c5791d/0x2d0c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 24043520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471027 data_alloc: 234881024 data_used: 21590016
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127229952 unmapped: 23994368 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127229952 unmapped: 23994368 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f895d000/0x0/0x4ffc00000, data 0x2c5a91d/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 23986176 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 23986176 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 23977984 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470131 data_alloc: 234881024 data_used: 21594112
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 23977984 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.881290436s of 10.984512329s, submitted: 180
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8934000/0x0/0x4ffc00000, data 0x2c8391d/0x2d38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472299 data_alloc: 234881024 data_used: 21581824
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8934000/0x0/0x4ffc00000, data 0x2c8391d/0x2d38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8931000/0x0/0x4ffc00000, data 0x2c8691d/0x2d3b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470231 data_alloc: 234881024 data_used: 21581824
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f892e000/0x0/0x4ffc00000, data 0x2c8991d/0x2d3e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.997168541s of 11.008128166s, submitted: 21
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8929000/0x0/0x4ffc00000, data 0x2c8e91d/0x2d43000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471191 data_alloc: 234881024 data_used: 21581824
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8926000/0x0/0x4ffc00000, data 0x2c9191d/0x2d46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8926000/0x0/0x4ffc00000, data 0x2c9191d/0x2d46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470743 data_alloc: 234881024 data_used: 21581824
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8923000/0x0/0x4ffc00000, data 0x2c9491d/0x2d49000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471335 data_alloc: 234881024 data_used: 21581824
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.687617302s of 15.696373940s, submitted: 9
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f891e000/0x0/0x4ffc00000, data 0x2c9991d/0x2d4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471207 data_alloc: 234881024 data_used: 21581824
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471291 data_alloc: 234881024 data_used: 21581824
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f891e000/0x0/0x4ffc00000, data 0x2c9991d/0x2d4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 23945216 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 23945216 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 23945216 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473307 data_alloc: 234881024 data_used: 21569536
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f891b000/0x0/0x4ffc00000, data 0x2c9c91d/0x2d51000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.476576805s of 12.488263130s, submitted: 20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471383 data_alloc: 234881024 data_used: 21569536
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8916000/0x0/0x4ffc00000, data 0x2ca191d/0x2d56000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8913000/0x0/0x4ffc00000, data 0x2ca491d/0x2d59000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471807 data_alloc: 234881024 data_used: 21569536
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8913000/0x0/0x4ffc00000, data 0x2ca491d/0x2d59000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.613974571s of 11.620789528s, submitted: 6
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f91630d20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc400 session 0x561f943e2000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e800 session 0x561f93ba45a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276303 data_alloc: 234881024 data_used: 15196160
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9863000/0x0/0x4ffc00000, data 0x1920898/0x19d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9863000/0x0/0x4ffc00000, data 0x1920898/0x19d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f948a1860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f9346d0e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e800 session 0x561f94067860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9889000/0x0/0x4ffc00000, data 0x1920898/0x19d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd800 session 0x561f936c9680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e400 session 0x561f937670e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f93ba50e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f91631a40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd800 session 0x561f9346c000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f943dcb40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.735616684s of 25.796653748s, submitted: 96
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f91fb7e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099441 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa93f000/0x0/0x4ffc00000, data 0x86a898/0x91d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f916d0b40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099441 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f916d1680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f916d1c20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa93f000/0x0/0x4ffc00000, data 0x86a898/0x91d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f935a72c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118505472 unmapped: 32718848 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa93e000/0x0/0x4ffc00000, data 0x86a8a8/0x91e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120892 data_alloc: 218103808 data_used: 7512064
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd800 session 0x561f94878f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.444226265s of 12.461299896s, submitted: 19
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc400 session 0x561f94838000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f936c9e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074433 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f9346d2c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f9485a5a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f936c8960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 33120256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f91630b40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f948794a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f93ba5e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 33120256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137758 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 33120256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 33120256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f935a6780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc400 session 0x561f935a74a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 33103872 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 33103872 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f935a7860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.693018913s of 13.733590126s, submitted: 58
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f91f5d0e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 34054144 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137575 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 34037760 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 34439168 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 34439168 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 34439168 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f94067680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f9208da40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f948a0960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 35528704 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083927 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 35528704 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 35528704 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083943 data_alloc: 218103808 data_used: 4718592
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083943 data_alloc: 218103808 data_used: 4718592
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083943 data_alloc: 218103808 data_used: 4718592
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.829650879s of 21.862865448s, submitted: 51
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f94067e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f93481680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35028992 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193586 data_alloc: 218103808 data_used: 4718592
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35028992 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d71000/0x0/0x4ffc00000, data 0x14378fa/0x14eb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 34947072 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 34947072 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f948a0b40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 34701312 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 34832384 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195862 data_alloc: 218103808 data_used: 4816896
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d4d000/0x0/0x4ffc00000, data 0x145b8fa/0x150f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1290406 data_alloc: 234881024 data_used: 18866176
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d4d000/0x0/0x4ffc00000, data 0x145b8fa/0x150f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.984073639s of 18.020271301s, submitted: 42
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 20914176 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387620 data_alloc: 234881024 data_used: 19660800
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9222000/0x0/0x4ffc00000, data 0x1f868fa/0x203a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130203648 unmapped: 21020672 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 21004288 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 21004288 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 21004288 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 21004288 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400426 data_alloc: 234881024 data_used: 19869696
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9194000/0x0/0x4ffc00000, data 0x20138fa/0x20c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9174000/0x0/0x4ffc00000, data 0x20348fa/0x20e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399554 data_alloc: 234881024 data_used: 19906560
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.606201172s of 11.678620338s, submitted: 110
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130359296 unmapped: 20865024 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f934732c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e400 session 0x561f93472d20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f93ba4b40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100573 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100573 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100573 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 32759808 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread fragmentation_score=0.000501 took=0.000042s
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 32759808 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100573 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 32759808 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 32759808 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f948a12c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f91f5c000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f948a1a40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e800 session 0x561f91624f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.195787430s of 22.213541031s, submitted: 29
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 32612352 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f9208c1e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f91631860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f943dda40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f936c90e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f000 session 0x561f9346c5a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x11698a8/0x121d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x11698a8/0x121d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190084 data_alloc: 218103808 data_used: 4726784
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x11698a8/0x121d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f94878b40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f93ba5860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x11698a8/0x121d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f935a6f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f920c2d20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 35987456 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194598 data_alloc: 218103808 data_used: 4726784
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 35979264 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 33488896 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 33488896 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x118d8b8/0x1242000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278634 data_alloc: 234881024 data_used: 17166336
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f91fb7e00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f91fb74a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f91fb72c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f91fb7680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.770525932s of 15.796881676s, submitted: 26
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f91fb6000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f948792c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f94878780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f9347af00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f91f5d680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 32866304 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x118d8b8/0x1242000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 32833536 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339484 data_alloc: 234881024 data_used: 17166336
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 128753664 unmapped: 26140672 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f940663c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09c00 session 0x561f91fb70e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f91fb7a40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127229952 unmapped: 27664384 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f937661e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127574016 unmapped: 27320320 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92ec000/0x0/0x4ffc00000, data 0x1eb795c/0x1f70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451231 data_alloc: 234881024 data_used: 24518656
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92ec000/0x0/0x4ffc00000, data 0x1eb795c/0x1f70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451247 data_alloc: 234881024 data_used: 24518656
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92ec000/0x0/0x4ffc00000, data 0x1eb795c/0x1f70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 21815296 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 21815296 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.282657623s of 13.364300728s, submitted: 110
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537375 data_alloc: 234881024 data_used: 25477120
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f880d000/0x0/0x4ffc00000, data 0x299695c/0x2a4f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87ee000/0x0/0x4ffc00000, data 0x29b595c/0x2a6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87ee000/0x0/0x4ffc00000, data 0x29b595c/0x2a6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1536631 data_alloc: 234881024 data_used: 25481216
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87ee000/0x0/0x4ffc00000, data 0x29b595c/0x2a6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.975310326s of 12.046205521s, submitted: 111
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138788864 unmapped: 16105472 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1536935 data_alloc: 234881024 data_used: 25481216
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87e1000/0x0/0x4ffc00000, data 0x29c295c/0x2a7b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138821632 unmapped: 16072704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87e1000/0x0/0x4ffc00000, data 0x29c295c/0x2a7b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138862592 unmapped: 16031744 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138862592 unmapped: 16031744 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f948a12c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f9346cd20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138870784 unmapped: 16023552 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08400 session 0x561f948381e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 18325504 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349479 data_alloc: 234881024 data_used: 17551360
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 18325504 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 18325504 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f99e6000/0x0/0x4ffc00000, data 0x17bf8b8/0x1874000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f9485a3c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f936c8960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f916d0000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126994 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126994 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126994 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126994 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f93767a40
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f937672c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f93766000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f93767860
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.702108383s of 26.754915237s, submitted: 81
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f93766780
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f934801e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f91625680
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f931d5c20
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f916d03c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac31000/0x0/0x4ffc00000, data 0x5778c1/0x62b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f948a05a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f948a10e0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210092 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f948a0000
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f925743c0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xfc590a/0x107a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129728512 unmapped: 33562624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xfc590a/0x107a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283506 data_alloc: 234881024 data_used: 15085568
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xfc590a/0x107a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xfc590a/0x107a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283506 data_alloc: 234881024 data_used: 15085568
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.073707581s of 16.113704681s, submitted: 50
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 27885568 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 27926528 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 27926528 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8766000/0x0/0x4ffc00000, data 0x18a190a/0x1956000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 27926528 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358618 data_alloc: 234881024 data_used: 15777792
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 27926528 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135446528 unmapped: 27844608 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135446528 unmapped: 27844608 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8766000/0x0/0x4ffc00000, data 0x18a190a/0x1956000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135446528 unmapped: 27844608 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356882 data_alloc: 234881024 data_used: 15781888
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8747000/0x0/0x4ffc00000, data 0x18c090a/0x1975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.588966370s of 11.650872231s, submitted: 111
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f92574f00
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f934734a0
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f948a0960
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'config diff' '{prefix=config diff}'
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'config show' '{prefix=config show}'
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 32980992 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 33103872 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:03:01 np0005534696 ceph-osd[77914]: do_command 'log dump' '{prefix=log dump}'
Nov 25 05:03:01 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 05:03:01 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2140606591' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 05:03:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 05:03:02 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/193731057' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 05:03:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:02.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 05:03:02 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366599518' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 05:03:02 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 25 05:03:02 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/33019584' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 05:03:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:03.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:03 np0005534696 podman[240418]: 2025-11-25 10:03:03.440350248 +0000 UTC m=+0.082768731 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 05:03:03 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 25 05:03:03 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3116723934' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 05:03:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:04 np0005534696 nova_compute[228704]: 2025-11-25 10:03:04.124 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/774446042' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4109658705' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 25 05:03:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:04.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/290580060' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1703170186' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 05:03:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-crash-compute-2[76069]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 25 05:03:04 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/771778548' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 05:03:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3119001524' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 05:03:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2123548578' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 05:03:05 np0005534696 nova_compute[228704]: 2025-11-25 10:03:05.113 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3730769304' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 05:03:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:03:05.357 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:03:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:03:05.358 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:03:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:03:05.358 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4267367237' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 05:03:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3265515761' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/20258841' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1025545117' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 25 05:03:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:06 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 25 05:03:06 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3744911263' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 05:03:06 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Nov 25 05:03:06 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516758813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 25 05:03:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:06.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:06 np0005534696 systemd[1]: Starting Hostname Service...
Nov 25 05:03:06 np0005534696 systemd[1]: Started Hostname Service.
Nov 25 05:03:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:07 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 25 05:03:07 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/522284379' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 05:03:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:08.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:08 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 25 05:03:08 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1669203688' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 05:03:08 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 05:03:08 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 05:03:08 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 05:03:08 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 05:03:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:09 np0005534696 nova_compute[228704]: 2025-11-25 10:03:09.126 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:09 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 05:03:09 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605452620' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 05:03:09 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 25 05:03:09 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/471202242' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 05:03:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:09.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:09 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 05:03:09 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 05:03:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3490083567' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 25 05:03:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:10 np0005534696 nova_compute[228704]: 2025-11-25 10:03:10.113 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2934849903' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3270856375' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 05:03:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1648111896' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 25 05:03:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:11 np0005534696 podman[241707]: 2025-11-25 10:03:11.028261945 +0000 UTC m=+0.110069613 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 05:03:11 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 25 05:03:11 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3299312650' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 05:03:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:11.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:11 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Nov 25 05:03:11 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2060492933' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 25 05:03:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:12 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Nov 25 05:03:12 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/73558721' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 25 05:03:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:12.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:12 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Nov 25 05:03:12 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1362293775' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 25 05:03:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:13.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:13 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Nov 25 05:03:13 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1257740203' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 25 05:03:13 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Nov 25 05:03:13 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1956004436' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 25 05:03:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:14 np0005534696 nova_compute[228704]: 2025-11-25 10:03:14.127 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:14.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:14 np0005534696 ovs-appctl[243032]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 05:03:14 np0005534696 ovs-appctl[243039]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 05:03:14 np0005534696 ovs-appctl[243044]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 05:03:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 25 05:03:15 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1141962610' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 05:03:15 np0005534696 nova_compute[228704]: 2025-11-25 10:03:15.115 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 05:03:15 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3477 syncs, 3.28 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4105 writes, 14K keys, 4105 commit groups, 1.0 writes per commit group, ingest: 17.00 MB, 0.03 MB/s#012Interval WAL: 4105 writes, 1816 syncs, 2.26 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 05:03:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:15.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 05:03:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2300953125' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 05:03:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Nov 25 05:03:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812479581' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 25 05:03:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Nov 25 05:03:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1702968464' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 25 05:03:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:16.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Nov 25 05:03:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2826370956' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 25 05:03:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 05:03:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1562817346' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 05:03:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:17.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Nov 25 05:03:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/537900919' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 25 05:03:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Nov 25 05:03:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2147542039' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 25 05:03:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Nov 25 05:03:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3973466451' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 25 05:03:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:18.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:19 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Nov 25 05:03:19 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3433541028' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 25 05:03:19 np0005534696 nova_compute[228704]: 2025-11-25 10:03:19.129 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:19 np0005534696 podman[244566]: 2025-11-25 10:03:19.343234915 +0000 UTC m=+0.053366187 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 05:03:19 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Nov 25 05:03:19 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/258153879' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 25 05:03:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:19.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Nov 25 05:03:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3697817588' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 25 05:03:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:20 np0005534696 nova_compute[228704]: 2025-11-25 10:03:20.116 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:20 np0005534696 nova_compute[228704]: 2025-11-25 10:03:20.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:20.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Nov 25 05:03:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3958941425' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 25 05:03:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:21 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Nov 25 05:03:21 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4072822949' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 25 05:03:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:22 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Nov 25 05:03:22 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1370730776' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 25 05:03:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:22 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Nov 25 05:03:22 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3438577591' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 25 05:03:22 np0005534696 nova_compute[228704]: 2025-11-25 10:03:22.373 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:22 np0005534696 nova_compute[228704]: 2025-11-25 10:03:22.373 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 05:03:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:22.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 25 05:03:23 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2620515358' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 05:03:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 25 05:03:23 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2439157298' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 05:03:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Nov 25 05:03:23 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2815871477' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 25 05:03:23 np0005534696 virtqemud[228342]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 05:03:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:24 np0005534696 nova_compute[228704]: 2025-11-25 10:03:24.130 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:24 np0005534696 nova_compute[228704]: 2025-11-25 10:03:24.367 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:24.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:24 np0005534696 systemd[1]: Starting Time & Date Service...
Nov 25 05:03:24 np0005534696 nova_compute[228704]: 2025-11-25 10:03:24.552 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:03:24 np0005534696 nova_compute[228704]: 2025-11-25 10:03:24.552 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:03:24 np0005534696 nova_compute[228704]: 2025-11-25 10:03:24.553 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:03:24 np0005534696 nova_compute[228704]: 2025-11-25 10:03:24.553 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:03:24 np0005534696 nova_compute[228704]: 2025-11-25 10:03:24.553 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:03:24 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 25 05:03:24 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/350362228' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 25 05:03:24 np0005534696 systemd[1]: Started Time & Date Service.
Nov 25 05:03:24 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:03:24 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2759359847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:03:24 np0005534696 nova_compute[228704]: 2025-11-25 10:03:24.903 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:03:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.117 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.168 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.169 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4693MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.169 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.169 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:03:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.464 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.464 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.481 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:03:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:03:25 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2944657567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.826 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.829 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.847 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.848 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:03:25 np0005534696 nova_compute[228704]: 2025-11-25 10:03:25.848 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:03:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:03:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:26.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:03:26 np0005534696 nova_compute[228704]: 2025-11-25 10:03:26.838 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:26 np0005534696 nova_compute[228704]: 2025-11-25 10:03:26.838 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:26 np0005534696 nova_compute[228704]: 2025-11-25 10:03:26.839 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:03:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:27 np0005534696 nova_compute[228704]: 2025-11-25 10:03:27.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:27 np0005534696 nova_compute[228704]: 2025-11-25 10:03:27.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:27 np0005534696 nova_compute[228704]: 2025-11-25 10:03:27.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 05:03:27 np0005534696 nova_compute[228704]: 2025-11-25 10:03:27.370 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 05:03:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:27.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:28.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:29 np0005534696 nova_compute[228704]: 2025-11-25 10:03:29.132 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:03:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:29.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:03:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:30 np0005534696 nova_compute[228704]: 2025-11-25 10:03:30.118 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:30 np0005534696 nova_compute[228704]: 2025-11-25 10:03:30.370 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:30 np0005534696 nova_compute[228704]: 2025-11-25 10:03:30.371 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:03:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:30.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:03:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:31.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:32 np0005534696 nova_compute[228704]: 2025-11-25 10:03:32.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:32 np0005534696 nova_compute[228704]: 2025-11-25 10:03:32.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:03:32 np0005534696 nova_compute[228704]: 2025-11-25 10:03:32.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:03:32 np0005534696 nova_compute[228704]: 2025-11-25 10:03:32.367 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:03:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:32.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:33 np0005534696 podman[245739]: 2025-11-25 10:03:33.103843301 +0000 UTC m=+0.032230935 container create f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 25 05:03:33 np0005534696 systemd[1]: Started libpod-conmon-f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4.scope.
Nov 25 05:03:33 np0005534696 systemd[1]: Started libcrun container.
Nov 25 05:03:33 np0005534696 podman[245739]: 2025-11-25 10:03:33.161311819 +0000 UTC m=+0.089699453 container init f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_euclid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 05:03:33 np0005534696 podman[245739]: 2025-11-25 10:03:33.167016199 +0000 UTC m=+0.095403843 container start f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Nov 25 05:03:33 np0005534696 podman[245739]: 2025-11-25 10:03:33.168306181 +0000 UTC m=+0.096693815 container attach f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 25 05:03:33 np0005534696 peaceful_euclid[245753]: 167 167
Nov 25 05:03:33 np0005534696 systemd[1]: libpod-f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4.scope: Deactivated successfully.
Nov 25 05:03:33 np0005534696 conmon[245753]: conmon f611fec7c69a3176ebdf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4.scope/container/memory.events
Nov 25 05:03:33 np0005534696 podman[245739]: 2025-11-25 10:03:33.173210553 +0000 UTC m=+0.101598187 container died f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 05:03:33 np0005534696 systemd[1]: var-lib-containers-storage-overlay-2684a9b1cc665f7511b56b298aa78ef4364dccc237a146f910bc3573765653ce-merged.mount: Deactivated successfully.
Nov 25 05:03:33 np0005534696 podman[245739]: 2025-11-25 10:03:33.091089845 +0000 UTC m=+0.019477499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 05:03:33 np0005534696 podman[245739]: 2025-11-25 10:03:33.192458258 +0000 UTC m=+0.120845892 container remove f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 25 05:03:33 np0005534696 systemd[1]: libpod-conmon-f611fec7c69a3176ebdfa56edc053f4084fc30c9bb13704ab2969cdb140de3b4.scope: Deactivated successfully.
Nov 25 05:03:33 np0005534696 podman[245776]: 2025-11-25 10:03:33.31767072 +0000 UTC m=+0.030547412 container create 1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wescoff, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 25 05:03:33 np0005534696 systemd[1]: Started libpod-conmon-1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65.scope.
Nov 25 05:03:33 np0005534696 systemd[1]: Started libcrun container.
Nov 25 05:03:33 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5886c8cf03b7b7f24483355f41c8fb11b40c7bab7b0da36adbfa1f9979f097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 25 05:03:33 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5886c8cf03b7b7f24483355f41c8fb11b40c7bab7b0da36adbfa1f9979f097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 25 05:03:33 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5886c8cf03b7b7f24483355f41c8fb11b40c7bab7b0da36adbfa1f9979f097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 25 05:03:33 np0005534696 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c5886c8cf03b7b7f24483355f41c8fb11b40c7bab7b0da36adbfa1f9979f097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 25 05:03:33 np0005534696 podman[245776]: 2025-11-25 10:03:33.38623028 +0000 UTC m=+0.099106982 container init 1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 25 05:03:33 np0005534696 podman[245776]: 2025-11-25 10:03:33.391474664 +0000 UTC m=+0.104351366 container start 1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wescoff, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 25 05:03:33 np0005534696 podman[245776]: 2025-11-25 10:03:33.39271429 +0000 UTC m=+0.105591012 container attach 1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 25 05:03:33 np0005534696 podman[245776]: 2025-11-25 10:03:33.306960585 +0000 UTC m=+0.019837297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 25 05:03:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:33.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:33 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]: [
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:    {
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "available": false,
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "being_replaced": false,
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "ceph_device_lvm": false,
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "lsm_data": {},
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "lvs": [],
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "path": "/dev/sr0",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "rejected_reasons": [
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "Has a FileSystem",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "Insufficient space (<5GB)"
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        ],
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        "sys_api": {
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "actuators": null,
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "device_nodes": [
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:                "sr0"
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            ],
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "devname": "sr0",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "human_readable_size": "474.00 KB",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "id_bus": "ata",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "model": "QEMU DVD-ROM",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "nr_requests": "64",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "parent": "/dev/sr0",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "partitions": {},
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "path": "/dev/sr0",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "removable": "1",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "rev": "2.5+",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "ro": "0",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "rotational": "1",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "sas_address": "",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "sas_device_handle": "",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "scheduler_mode": "mq-deadline",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "sectors": 0,
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "sectorsize": "2048",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "size": 485376.0,
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "support_discard": "2048",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "type": "disk",
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:            "vendor": "QEMU"
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:        }
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]:    }
Nov 25 05:03:33 np0005534696 thirsty_wescoff[245789]: ]
Nov 25 05:03:33 np0005534696 systemd[1]: libpod-1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65.scope: Deactivated successfully.
Nov 25 05:03:33 np0005534696 podman[245776]: 2025-11-25 10:03:33.92636854 +0000 UTC m=+0.639245252 container died 1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wescoff, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 25 05:03:33 np0005534696 systemd[1]: var-lib-containers-storage-overlay-6c5886c8cf03b7b7f24483355f41c8fb11b40c7bab7b0da36adbfa1f9979f097-merged.mount: Deactivated successfully.
Nov 25 05:03:33 np0005534696 podman[245776]: 2025-11-25 10:03:33.964204387 +0000 UTC m=+0.677081089 container remove 1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 25 05:03:33 np0005534696 systemd[1]: libpod-conmon-1e282e05d2d98f790990398f671d7e1a349685ade6a3078a525271f0dfa85c65.scope: Deactivated successfully.
Nov 25 05:03:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:34 np0005534696 podman[246965]: 2025-11-25 10:03:34.055885253 +0000 UTC m=+0.101566007 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 05:03:34 np0005534696 nova_compute[228704]: 2025-11-25 10:03:34.134 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:34 np0005534696 nova_compute[228704]: 2025-11-25 10:03:34.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:34 np0005534696 nova_compute[228704]: 2025-11-25 10:03:34.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:03:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:34.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:35 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:03:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:35 np0005534696 nova_compute[228704]: 2025-11-25 10:03:35.121 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:35.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:35 np0005534696 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 05:03:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:37.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.038121) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018038156, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1677, "num_deletes": 250, "total_data_size": 3513800, "memory_usage": 3576512, "flush_reason": "Manual Compaction"}
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018044395, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2264580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28624, "largest_seqno": 30296, "table_properties": {"data_size": 2256608, "index_size": 4402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 21290, "raw_average_key_size": 21, "raw_value_size": 2238936, "raw_average_value_size": 2270, "num_data_blocks": 190, "num_entries": 986, "num_filter_entries": 986, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764064931, "oldest_key_time": 1764064931, "file_creation_time": 1764065018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 6302 microseconds, and 4053 cpu microseconds.
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.044425) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2264580 bytes OK
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.044439) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.044851) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.044866) EVENT_LOG_v1 {"time_micros": 1764065018044862, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.044883) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3505015, prev total WAL file size 3505015, number of live WAL files 2.
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.045511) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2211KB)], [54(13MB)]
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018045541, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 16170493, "oldest_snapshot_seqno": -1}
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6321 keys, 14907842 bytes, temperature: kUnknown
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018077449, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 14907842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14864660, "index_size": 26304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 162211, "raw_average_key_size": 25, "raw_value_size": 14749907, "raw_average_value_size": 2333, "num_data_blocks": 1060, "num_entries": 6321, "num_filter_entries": 6321, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764065018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.077709) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 14907842 bytes
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.080760) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 504.6 rd, 465.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 13.3 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(13.7) write-amplify(6.6) OK, records in: 6839, records dropped: 518 output_compression: NoCompression
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.080773) EVENT_LOG_v1 {"time_micros": 1764065018080767, "job": 32, "event": "compaction_finished", "compaction_time_micros": 32046, "compaction_time_cpu_micros": 22760, "output_level": 6, "num_output_files": 1, "total_output_size": 14907842, "num_input_records": 6839, "num_output_records": 6321, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018081374, "job": 32, "event": "table_file_deletion", "file_number": 56}
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065018083059, "job": 32, "event": "table_file_deletion", "file_number": 54}
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.045477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.083159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.083162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.083163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.083164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:03:38 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:03:38.083165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:03:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:38.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:39 np0005534696 nova_compute[228704]: 2025-11-25 10:03:39.135 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:03:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:39.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:40 np0005534696 nova_compute[228704]: 2025-11-25 10:03:40.123 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:40.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:41 np0005534696 podman[247023]: 2025-11-25 10:03:41.348843988 +0000 UTC m=+0.058429861 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 05:03:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:41.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:42.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:03:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:43.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:03:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:43 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:43 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:43 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:43 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:44 np0005534696 nova_compute[228704]: 2025-11-25 10:03:44.137 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:03:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:03:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:45 np0005534696 nova_compute[228704]: 2025-11-25 10:03:45.125 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:45.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:46.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:03:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:47.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:03:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:47 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:47 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:47 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:47 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:48.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:49 np0005534696 nova_compute[228704]: 2025-11-25 10:03:49.139 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:49.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:50 np0005534696 nova_compute[228704]: 2025-11-25 10:03:50.127 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:50 np0005534696 podman[247079]: 2025-11-25 10:03:50.216259578 +0000 UTC m=+0.043633145 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 05:03:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:50.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:51.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:52.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:53.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:54 np0005534696 nova_compute[228704]: 2025-11-25 10:03:54.141 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:54.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:54 np0005534696 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 05:03:54 np0005534696 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 05:03:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:55 np0005534696 nova_compute[228704]: 2025-11-25 10:03:55.128 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:55.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:03:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:03:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:03:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:03:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:03:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:03:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:56.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:57.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:03:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:03:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:03:58.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:03:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:03:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:03:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:03:59 np0005534696 nova_compute[228704]: 2025-11-25 10:03:59.143 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:03:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:03:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:03:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:03:59.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:00 np0005534696 nova_compute[228704]: 2025-11-25 10:04:00.131 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:04:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:04:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:01 np0005534696 systemd-logind[744]: Session 54 logged out. Waiting for processes to exit.
Nov 25 05:04:01 np0005534696 systemd[1]: session-54.scope: Deactivated successfully.
Nov 25 05:04:01 np0005534696 systemd[1]: session-54.scope: Consumed 2min 5.495s CPU time, 709.1M memory peak, read 283.1M from disk, written 87.7M to disk.
Nov 25 05:04:01 np0005534696 systemd-logind[744]: Removed session 54.
Nov 25 05:04:01 np0005534696 systemd-logind[744]: New session 55 of user zuul.
Nov 25 05:04:01 np0005534696 systemd[1]: Started Session 55 of User zuul.
Nov 25 05:04:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:01.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:01 np0005534696 systemd[1]: session-55.scope: Deactivated successfully.
Nov 25 05:04:01 np0005534696 systemd-logind[744]: Session 55 logged out. Waiting for processes to exit.
Nov 25 05:04:01 np0005534696 systemd-logind[744]: Removed session 55.
Nov 25 05:04:01 np0005534696 systemd-logind[744]: New session 56 of user zuul.
Nov 25 05:04:01 np0005534696 systemd[1]: Started Session 56 of User zuul.
Nov 25 05:04:01 np0005534696 systemd[1]: session-56.scope: Deactivated successfully.
Nov 25 05:04:01 np0005534696 systemd-logind[744]: Session 56 logged out. Waiting for processes to exit.
Nov 25 05:04:01 np0005534696 systemd-logind[744]: Removed session 56.
Nov 25 05:04:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:02.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:03.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:04 np0005534696 nova_compute[228704]: 2025-11-25 10:04:04.145 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:04 np0005534696 podman[247198]: 2025-11-25 10:04:04.324674745 +0000 UTC m=+0.034722691 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 05:04:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:04.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:05 np0005534696 nova_compute[228704]: 2025-11-25 10:04:05.134 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:04:05.358 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:04:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:04:05.359 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:04:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:04:05.359 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:04:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:05.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:04:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:06.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:04:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:07.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:08.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:08 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:08 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:08 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:08 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:09 np0005534696 nova_compute[228704]: 2025-11-25 10:04:09.147 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:09.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:10 np0005534696 nova_compute[228704]: 2025-11-25 10:04:10.134 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:10.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:11.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:12 np0005534696 podman[247223]: 2025-11-25 10:04:12.344153865 +0000 UTC m=+0.057034951 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 05:04:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:04:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:12.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:04:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:13.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:14 np0005534696 nova_compute[228704]: 2025-11-25 10:04:14.150 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:14.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:15 np0005534696 nova_compute[228704]: 2025-11-25 10:04:15.135 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:15.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:16.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:04:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:04:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:18 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:18 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:18 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:18 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:18.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:19 np0005534696 nova_compute[228704]: 2025-11-25 10:04:19.153 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:19.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:20 np0005534696 nova_compute[228704]: 2025-11-25 10:04:20.136 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:20 np0005534696 podman[247254]: 2025-11-25 10:04:20.333153632 +0000 UTC m=+0.040771441 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 05:04:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:20.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:21 np0005534696 nova_compute[228704]: 2025-11-25 10:04:21.351 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:21.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:22.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:22 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:22 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:22 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:23 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:04:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:23.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:04:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:24 np0005534696 nova_compute[228704]: 2025-11-25 10:04:24.156 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:04:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:24.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:04:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:25 np0005534696 nova_compute[228704]: 2025-11-25 10:04:25.138 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:04:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:25.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:04:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.427 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.427 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.427 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.427 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.428 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:04:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:04:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:26.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:04:26 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:04:26 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2853664828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.763 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.959 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.960 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4833MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.960 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:04:26 np0005534696 nova_compute[228704]: 2025-11-25 10:04:26.960 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:04:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.174 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.174 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.190 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing inventories for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.372 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating ProviderTree inventory for provider e8eea1e0-1833-4152-af65-8b442fac3e0d from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.372 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.386 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing aggregate associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.429 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing trait associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, traits: HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.456 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:04:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:27.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:27 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:04:27 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/404904334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.788 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.792 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.809 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.810 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:04:27 np0005534696 nova_compute[228704]: 2025-11-25 10:04:27.810 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:04:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:27 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:27 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:27 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:28 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.051305) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068051366, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 708, "num_deletes": 256, "total_data_size": 1436441, "memory_usage": 1462336, "flush_reason": "Manual Compaction"}
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068054461, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 946095, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30301, "largest_seqno": 31004, "table_properties": {"data_size": 942585, "index_size": 1354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7391, "raw_average_key_size": 18, "raw_value_size": 935661, "raw_average_value_size": 2282, "num_data_blocks": 60, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065018, "oldest_key_time": 1764065018, "file_creation_time": 1764065068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 3179 microseconds, and 2346 cpu microseconds.
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.054495) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 946095 bytes OK
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.054505) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.054839) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.054848) EVENT_LOG_v1 {"time_micros": 1764065068054846, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.054861) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1432633, prev total WAL file size 1432633, number of live WAL files 2.
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.055306) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(923KB)], [57(14MB)]
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068055352, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 15853937, "oldest_snapshot_seqno": -1}
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6208 keys, 15728138 bytes, temperature: kUnknown
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068092627, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 15728138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15684348, "index_size": 27168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 161010, "raw_average_key_size": 25, "raw_value_size": 15570199, "raw_average_value_size": 2508, "num_data_blocks": 1094, "num_entries": 6208, "num_filter_entries": 6208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764065068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.092783) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 15728138 bytes
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.093373) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 424.8 rd, 421.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.2 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(33.4) write-amplify(16.6) OK, records in: 6731, records dropped: 523 output_compression: NoCompression
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.093388) EVENT_LOG_v1 {"time_micros": 1764065068093381, "job": 34, "event": "compaction_finished", "compaction_time_micros": 37324, "compaction_time_cpu_micros": 22251, "output_level": 6, "num_output_files": 1, "total_output_size": 15728138, "num_input_records": 6731, "num_output_records": 6208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068093569, "job": 34, "event": "table_file_deletion", "file_number": 59}
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065068095601, "job": 34, "event": "table_file_deletion", "file_number": 57}
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.055245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:04:28 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:04:28.095626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:04:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:04:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:04:28 np0005534696 nova_compute[228704]: 2025-11-25 10:04:28.811 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:28 np0005534696 nova_compute[228704]: 2025-11-25 10:04:28.812 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:04:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:29 np0005534696 nova_compute[228704]: 2025-11-25 10:04:29.158 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:29 np0005534696 nova_compute[228704]: 2025-11-25 10:04:29.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:29.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:30 np0005534696 nova_compute[228704]: 2025-11-25 10:04:30.138 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:31 np0005534696 nova_compute[228704]: 2025-11-25 10:04:31.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:31.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:32 np0005534696 nova_compute[228704]: 2025-11-25 10:04:32.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:32 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:32 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:32 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:33.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:34 np0005534696 nova_compute[228704]: 2025-11-25 10:04:34.159 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:34 np0005534696 nova_compute[228704]: 2025-11-25 10:04:34.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:34 np0005534696 nova_compute[228704]: 2025-11-25 10:04:34.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:04:34 np0005534696 nova_compute[228704]: 2025-11-25 10:04:34.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:04:34 np0005534696 nova_compute[228704]: 2025-11-25 10:04:34.388 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:04:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:34.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:35 np0005534696 nova_compute[228704]: 2025-11-25 10:04:35.140 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:35 np0005534696 podman[247355]: 2025-11-25 10:04:35.327156119 +0000 UTC m=+0.033456424 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 05:04:35 np0005534696 nova_compute[228704]: 2025-11-25 10:04:35.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:35.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:36 np0005534696 nova_compute[228704]: 2025-11-25 10:04:36.351 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:04:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:36.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:37.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:38 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:38.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:39 np0005534696 nova_compute[228704]: 2025-11-25 10:04:39.161 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:39.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:04:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:04:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:04:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:04:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:04:39 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:04:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:40 np0005534696 nova_compute[228704]: 2025-11-25 10:04:40.143 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:04:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:04:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:41.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:04:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:42.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:04:42 np0005534696 podman[247547]: 2025-11-25 10:04:42.681671406 +0000 UTC m=+0.054954420 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 05:04:42 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:04:42 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:04:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:42 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:42 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:42 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:43 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:43.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:44 np0005534696 nova_compute[228704]: 2025-11-25 10:04:44.164 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:44.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:45 np0005534696 nova_compute[228704]: 2025-11-25 10:04:45.144 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:45.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:46.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:47.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:47 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:47 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:47 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:48 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:48.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:49 np0005534696 nova_compute[228704]: 2025-11-25 10:04:49.167 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:49.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:50 np0005534696 nova_compute[228704]: 2025-11-25 10:04:50.145 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:50.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:51 np0005534696 podman[247605]: 2025-11-25 10:04:51.328374474 +0000 UTC m=+0.040635035 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 05:04:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:04:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:51.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:04:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:52.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:52 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:53 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:53 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:53 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:53.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:54 np0005534696 nova_compute[228704]: 2025-11-25 10:04:54.169 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:04:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:54.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:04:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:55 np0005534696 nova_compute[228704]: 2025-11-25 10:04:55.146 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:55.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:04:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:56.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:57.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:04:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:57 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:04:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:57 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:04:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:58 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:04:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:04:58 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:04:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:04:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:04:58.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:04:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:04:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:04:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:04:59 np0005534696 nova_compute[228704]: 2025-11-25 10:04:59.171 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:04:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:04:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:04:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:04:59.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:00 np0005534696 nova_compute[228704]: 2025-11-25 10:05:00.147 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:00.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:01.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:02.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:02 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:02 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:02 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:03.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:04 np0005534696 nova_compute[228704]: 2025-11-25 10:05:04.173 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:04.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:05 np0005534696 nova_compute[228704]: 2025-11-25 10:05:05.149 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:05:05.359 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:05:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:05:05.360 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:05:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:05:05.360 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:05:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:05.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:06 np0005534696 podman[247663]: 2025-11-25 10:05:06.329396985 +0000 UTC m=+0.040869496 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 05:05:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:06.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:07.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:08 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:08.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:09 np0005534696 nova_compute[228704]: 2025-11-25 10:05:09.175 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:09.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:10 np0005534696 nova_compute[228704]: 2025-11-25 10:05:10.150 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:10.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:11.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:12.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:12 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:12 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:12 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:13 np0005534696 podman[247686]: 2025-11-25 10:05:13.357419182 +0000 UTC m=+0.067677308 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller)
Nov 25 05:05:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:13.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:14 np0005534696 nova_compute[228704]: 2025-11-25 10:05:14.176 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:14.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:15 np0005534696 nova_compute[228704]: 2025-11-25 10:05:15.151 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:15.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:16.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:17.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:17 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:18.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:19 np0005534696 nova_compute[228704]: 2025-11-25 10:05:19.178 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:19.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:20 np0005534696 nova_compute[228704]: 2025-11-25 10:05:20.155 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:20.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:21.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:22 np0005534696 podman[247717]: 2025-11-25 10:05:22.331234411 +0000 UTC m=+0.042958754 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 25 05:05:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:22.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:23.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:24 np0005534696 nova_compute[228704]: 2025-11-25 10:05:24.181 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.373416) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124373438, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 829, "num_deletes": 251, "total_data_size": 1763199, "memory_usage": 1793360, "flush_reason": "Manual Compaction"}
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124376904, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 1160685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31009, "largest_seqno": 31833, "table_properties": {"data_size": 1156760, "index_size": 1705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8970, "raw_average_key_size": 19, "raw_value_size": 1148812, "raw_average_value_size": 2524, "num_data_blocks": 74, "num_entries": 455, "num_filter_entries": 455, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065068, "oldest_key_time": 1764065068, "file_creation_time": 1764065124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 3511 microseconds, and 2455 cpu microseconds.
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.376929) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 1160685 bytes OK
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.376940) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377210) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377220) EVENT_LOG_v1 {"time_micros": 1764065124377217, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377231) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1758909, prev total WAL file size 1758909, number of live WAL files 2.
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377622) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(1133KB)], [60(14MB)]
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124377666, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16888823, "oldest_snapshot_seqno": -1}
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6145 keys, 14784529 bytes, temperature: kUnknown
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124414493, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14784529, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14741903, "index_size": 26132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15429, "raw_key_size": 160360, "raw_average_key_size": 26, "raw_value_size": 14629521, "raw_average_value_size": 2380, "num_data_blocks": 1047, "num_entries": 6145, "num_filter_entries": 6145, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764065124, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.414684) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14784529 bytes
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.415069) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 458.1 rd, 401.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 15.0 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(27.3) write-amplify(12.7) OK, records in: 6663, records dropped: 518 output_compression: NoCompression
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.415083) EVENT_LOG_v1 {"time_micros": 1764065124415076, "job": 36, "event": "compaction_finished", "compaction_time_micros": 36866, "compaction_time_cpu_micros": 21814, "output_level": 6, "num_output_files": 1, "total_output_size": 14784529, "num_input_records": 6663, "num_output_records": 6145, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124415306, "job": 36, "event": "table_file_deletion", "file_number": 62}
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065124417205, "job": 36, "event": "table_file_deletion", "file_number": 60}
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.377592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.417235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.417240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.417241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.417242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:05:24 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:05:24.417243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:05:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:24.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:25 np0005534696 nova_compute[228704]: 2025-11-25 10:05:25.156 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:26.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.378 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.378 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.378 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.378 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.379 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:05:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:27.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:27 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:05:27 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/251651596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.718 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:05:27 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:05:27 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162503918' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.919 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.920 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4855MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.920 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.921 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.983 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.983 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:05:27 np0005534696 nova_compute[228704]: 2025-11-25 10:05:27.998 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:05:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:28 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:05:28 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2471864516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:05:28 np0005534696 nova_compute[228704]: 2025-11-25 10:05:28.341 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:05:28 np0005534696 nova_compute[228704]: 2025-11-25 10:05:28.344 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:05:28 np0005534696 nova_compute[228704]: 2025-11-25 10:05:28.357 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:05:28 np0005534696 nova_compute[228704]: 2025-11-25 10:05:28.358 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:05:28 np0005534696 nova_compute[228704]: 2025-11-25 10:05:28.358 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:05:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:28.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:29 np0005534696 nova_compute[228704]: 2025-11-25 10:05:29.182 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:29 np0005534696 nova_compute[228704]: 2025-11-25 10:05:29.358 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:29 np0005534696 nova_compute[228704]: 2025-11-25 10:05:29.358 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:29 np0005534696 nova_compute[228704]: 2025-11-25 10:05:29.359 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:29 np0005534696 nova_compute[228704]: 2025-11-25 10:05:29.359 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:05:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:29.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:30 np0005534696 nova_compute[228704]: 2025-11-25 10:05:30.159 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:30.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:31 np0005534696 nova_compute[228704]: 2025-11-25 10:05:31.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:31.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:32 np0005534696 nova_compute[228704]: 2025-11-25 10:05:32.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:32.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:33.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:34 np0005534696 nova_compute[228704]: 2025-11-25 10:05:34.185 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:34 np0005534696 nova_compute[228704]: 2025-11-25 10:05:34.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:34 np0005534696 nova_compute[228704]: 2025-11-25 10:05:34.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:05:34 np0005534696 nova_compute[228704]: 2025-11-25 10:05:34.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:05:34 np0005534696 nova_compute[228704]: 2025-11-25 10:05:34.370 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:05:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:34.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:35 np0005534696 nova_compute[228704]: 2025-11-25 10:05:35.160 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:35.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:36.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:37 np0005534696 podman[247818]: 2025-11-25 10:05:37.320105293 +0000 UTC m=+0.033406250 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 05:05:37 np0005534696 nova_compute[228704]: 2025-11-25 10:05:37.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:37 np0005534696 nova_compute[228704]: 2025-11-25 10:05:37.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:05:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:37 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:38.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:39 np0005534696 nova_compute[228704]: 2025-11-25 10:05:39.187 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:40 np0005534696 nova_compute[228704]: 2025-11-25 10:05:40.162 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:40.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:41.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:42.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:43 np0005534696 podman[248010]: 2025-11-25 10:05:43.932545988 +0000 UTC m=+0.080780093 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 05:05:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:44 np0005534696 nova_compute[228704]: 2025-11-25 10:05:44.188 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:44 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:05:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:44.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:45 np0005534696 nova_compute[228704]: 2025-11-25 10:05:45.163 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:45.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:46 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:46.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:47.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:05:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:48.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:49 np0005534696 nova_compute[228704]: 2025-11-25 10:05:49.189 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:49.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:50 np0005534696 nova_compute[228704]: 2025-11-25 10:05:50.164 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:50.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:51.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:05:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:52.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:05:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:53 np0005534696 podman[248069]: 2025-11-25 10:05:53.332384491 +0000 UTC m=+0.044661674 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 05:05:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:53.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:54 np0005534696 nova_compute[228704]: 2025-11-25 10:05:54.192 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:05:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:05:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:05:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:05:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:05:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:55 np0005534696 nova_compute[228704]: 2025-11-25 10:05:55.165 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:55.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:05:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:05:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:56.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:05:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:57.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:05:58.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:05:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:05:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:05:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:05:59 np0005534696 nova_compute[228704]: 2025-11-25 10:05:59.195 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:05:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:05:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:05:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:05:59.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:00 np0005534696 nova_compute[228704]: 2025-11-25 10:06:00.167 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:00.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:01.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:02.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:03.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:04 np0005534696 nova_compute[228704]: 2025-11-25 10:06:04.198 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:06:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:04.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:06:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:05 np0005534696 nova_compute[228704]: 2025-11-25 10:06:05.169 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:06:05.360 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:06:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:06:05.360 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:06:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:06:05.360 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:06:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:05.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:06.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:07.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:08 np0005534696 podman[248126]: 2025-11-25 10:06:08.325161284 +0000 UTC m=+0.037230367 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 05:06:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:08.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:09 np0005534696 nova_compute[228704]: 2025-11-25 10:06:09.200 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:09.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:10 np0005534696 nova_compute[228704]: 2025-11-25 10:06:10.171 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:10.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:06:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:11.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:06:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:12.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:13.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:14 np0005534696 nova_compute[228704]: 2025-11-25 10:06:14.203 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:14 np0005534696 podman[248148]: 2025-11-25 10:06:14.359552506 +0000 UTC m=+0.066593016 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 05:06:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:14.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:15 np0005534696 nova_compute[228704]: 2025-11-25 10:06:15.173 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:16.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:18.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:19 np0005534696 nova_compute[228704]: 2025-11-25 10:06:19.206 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:20 np0005534696 nova_compute[228704]: 2025-11-25 10:06:20.174 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:20.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:21.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:22.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:23.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:24 np0005534696 podman[248205]: 2025-11-25 10:06:24.039122335 +0000 UTC m=+0.071012445 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 05:06:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:24 np0005534696 nova_compute[228704]: 2025-11-25 10:06:24.209 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:24.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:25 np0005534696 nova_compute[228704]: 2025-11-25 10:06:25.175 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:25 np0005534696 nova_compute[228704]: 2025-11-25 10:06:25.351 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:26.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:27.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:28.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.212 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.378 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.378 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.378 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.378 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:06:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:29.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:29 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:06:29 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/407174291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.721 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.920 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.921 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4872MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.921 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.922 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.981 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.981 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:06:29 np0005534696 nova_compute[228704]: 2025-11-25 10:06:29.993 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:06:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:30 np0005534696 nova_compute[228704]: 2025-11-25 10:06:30.177 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:06:30 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2902791618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:06:30 np0005534696 nova_compute[228704]: 2025-11-25 10:06:30.369 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:06:30 np0005534696 nova_compute[228704]: 2025-11-25 10:06:30.373 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:06:30 np0005534696 nova_compute[228704]: 2025-11-25 10:06:30.385 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:06:30 np0005534696 nova_compute[228704]: 2025-11-25 10:06:30.386 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:06:30 np0005534696 nova_compute[228704]: 2025-11-25 10:06:30.386 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:06:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:30.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:31 np0005534696 nova_compute[228704]: 2025-11-25 10:06:31.386 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:31 np0005534696 nova_compute[228704]: 2025-11-25 10:06:31.387 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:06:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:32 np0005534696 nova_compute[228704]: 2025-11-25 10:06:32.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:32.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:33 np0005534696 nova_compute[228704]: 2025-11-25 10:06:33.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:34 np0005534696 nova_compute[228704]: 2025-11-25 10:06:34.215 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:34 np0005534696 nova_compute[228704]: 2025-11-25 10:06:34.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:34 np0005534696 nova_compute[228704]: 2025-11-25 10:06:34.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:06:34 np0005534696 nova_compute[228704]: 2025-11-25 10:06:34.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:06:34 np0005534696 nova_compute[228704]: 2025-11-25 10:06:34.372 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:06:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:34.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:35 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:35 np0005534696 nova_compute[228704]: 2025-11-25 10:06:35.177 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:35.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:36.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:37.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:38.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:39 np0005534696 nova_compute[228704]: 2025-11-25 10:06:39.218 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:39 np0005534696 podman[248282]: 2025-11-25 10:06:39.329221502 +0000 UTC m=+0.041008388 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 05:06:39 np0005534696 nova_compute[228704]: 2025-11-25 10:06:39.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:39 np0005534696 nova_compute[228704]: 2025-11-25 10:06:39.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:06:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:40 np0005534696 nova_compute[228704]: 2025-11-25 10:06:40.180 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:40.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:06:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:41.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:06:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:42.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:43.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:44 np0005534696 nova_compute[228704]: 2025-11-25 10:06:44.219 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:44.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:45 np0005534696 nova_compute[228704]: 2025-11-25 10:06:45.182 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:45 np0005534696 podman[248329]: 2025-11-25 10:06:45.354291859 +0000 UTC m=+0.060017823 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 05:06:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:06:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:45.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:06:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:46.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:47.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 25 05:06:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 25 05:06:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 25 05:06:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:06:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:06:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:06:47 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:06:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:48.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:49 np0005534696 nova_compute[228704]: 2025-11-25 10:06:49.222 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:49.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:50 np0005534696 nova_compute[228704]: 2025-11-25 10:06:50.183 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:50.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:06:50 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:06:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:51.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:06:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:53.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:06:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:54 np0005534696 nova_compute[228704]: 2025-11-25 10:06:54.225 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:54 np0005534696 podman[248465]: 2025-11-25 10:06:54.363273052 +0000 UTC m=+0.075460006 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 05:06:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:54.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:06:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:06:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:06:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:06:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:55 np0005534696 nova_compute[228704]: 2025-11-25 10:06:55.186 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:55.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:06:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:56.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:06:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:57.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:06:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:58 np0005534696 nova_compute[228704]: 2025-11-25 10:06:58.378 228708 DEBUG oslo_concurrency.processutils [None req-dbea7ddb-eb32-4e63-954a-7e69465c4db7 331b917bd3774be79aebd5ee1af3b1fa f414368112e54eacbcaf4af631b3b667 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:06:58 np0005534696 nova_compute[228704]: 2025-11-25 10:06:58.393 228708 DEBUG oslo_concurrency.processutils [None req-dbea7ddb-eb32-4e63-954a-7e69465c4db7 331b917bd3774be79aebd5ee1af3b1fa f414368112e54eacbcaf4af631b3b667 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:06:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:06:58.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:06:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:06:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:06:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:06:59 np0005534696 nova_compute[228704]: 2025-11-25 10:06:59.226 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:06:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:06:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:06:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:06:59.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:06:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:00 np0005534696 nova_compute[228704]: 2025-11-25 10:07:00.187 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:00.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:01.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:02.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:03.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:04 np0005534696 nova_compute[228704]: 2025-11-25 10:07:04.229 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:04 np0005534696 nova_compute[228704]: 2025-11-25 10:07:04.424 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:04 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:07:04.424 142676 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:6d:06', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'e2:28:10:f4:a6:5c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 25 05:07:04 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:07:04.425 142676 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 25 05:07:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:04.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:05 np0005534696 nova_compute[228704]: 2025-11-25 10:07:05.189 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:07:05.361 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:07:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:07:05.361 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:07:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:07:05.361 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:07:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:05.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:06.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:07.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:08 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:07:08.427 142676 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=f116e443-3007-4d69-b0d6-1b58bbc026ea, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 25 05:07:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:08.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:09 np0005534696 nova_compute[228704]: 2025-11-25 10:07:09.232 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:07:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:09.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:07:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:10 np0005534696 nova_compute[228704]: 2025-11-25 10:07:10.191 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:10 np0005534696 podman[248524]: 2025-11-25 10:07:10.351715513 +0000 UTC m=+0.049468111 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 05:07:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:10.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:12.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:13.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:14 np0005534696 nova_compute[228704]: 2025-11-25 10:07:14.234 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:15 np0005534696 nova_compute[228704]: 2025-11-25 10:07:15.193 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:15.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:16 np0005534696 podman[248546]: 2025-11-25 10:07:16.362745108 +0000 UTC m=+0.070771991 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 05:07:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:17.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:19 np0005534696 nova_compute[228704]: 2025-11-25 10:07:19.235 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:20 np0005534696 nova_compute[228704]: 2025-11-25 10:07:20.196 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:20.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:22.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:23.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:23 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:23 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:23 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:24 np0005534696 nova_compute[228704]: 2025-11-25 10:07:24.238 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:24.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:25 np0005534696 nova_compute[228704]: 2025-11-25 10:07:25.199 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:25 np0005534696 podman[248604]: 2025-11-25 10:07:25.331263793 +0000 UTC m=+0.043152488 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 05:07:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:26.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:28.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:28 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:28 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:28 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:28 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:29 np0005534696 nova_compute[228704]: 2025-11-25 10:07:29.241 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:29.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:30 np0005534696 nova_compute[228704]: 2025-11-25 10:07:30.199 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:30.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.379 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.380 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.380 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.380 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.380 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:07:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:31.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:31 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:07:31 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3449064442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.719 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.903 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.904 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4867MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.904 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.904 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.946 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.947 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:07:31 np0005534696 nova_compute[228704]: 2025-11-25 10:07:31.962 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:07:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:32 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:07:32 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2299839423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:07:32 np0005534696 nova_compute[228704]: 2025-11-25 10:07:32.300 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:07:32 np0005534696 nova_compute[228704]: 2025-11-25 10:07:32.303 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:07:32 np0005534696 nova_compute[228704]: 2025-11-25 10:07:32.312 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:07:32 np0005534696 nova_compute[228704]: 2025-11-25 10:07:32.313 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:07:32 np0005534696 nova_compute[228704]: 2025-11-25 10:07:32.314 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:07:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:32.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:32 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:32 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:32 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:32 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:33 np0005534696 nova_compute[228704]: 2025-11-25 10:07:33.314 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:33 np0005534696 nova_compute[228704]: 2025-11-25 10:07:33.315 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:07:33 np0005534696 nova_compute[228704]: 2025-11-25 10:07:33.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:33.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:34 np0005534696 nova_compute[228704]: 2025-11-25 10:07:34.243 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:34 np0005534696 nova_compute[228704]: 2025-11-25 10:07:34.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:34 np0005534696 nova_compute[228704]: 2025-11-25 10:07:34.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:07:34 np0005534696 nova_compute[228704]: 2025-11-25 10:07:34.356 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:07:34 np0005534696 nova_compute[228704]: 2025-11-25 10:07:34.371 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:07:34 np0005534696 nova_compute[228704]: 2025-11-25 10:07:34.372 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:34.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:35 np0005534696 nova_compute[228704]: 2025-11-25 10:07:35.200 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:35.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:36.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:37.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:38.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:39 np0005534696 nova_compute[228704]: 2025-11-25 10:07:39.246 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:39 np0005534696 nova_compute[228704]: 2025-11-25 10:07:39.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:39.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:40 np0005534696 nova_compute[228704]: 2025-11-25 10:07:40.202 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:40 np0005534696 nova_compute[228704]: 2025-11-25 10:07:40.351 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:07:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:40.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:41 np0005534696 podman[248681]: 2025-11-25 10:07:41.328444353 +0000 UTC m=+0.035826320 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 05:07:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:42.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:43.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:44 np0005534696 nova_compute[228704]: 2025-11-25 10:07:44.248 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:45 np0005534696 nova_compute[228704]: 2025-11-25 10:07:45.202 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:45.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:46.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:47 np0005534696 podman[248728]: 2025-11-25 10:07:47.353992349 +0000 UTC m=+0.066825043 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 05:07:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:47.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:48.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:49 np0005534696 nova_compute[228704]: 2025-11-25 10:07:49.251 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:49.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:50 np0005534696 nova_compute[228704]: 2025-11-25 10:07:50.203 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:51 np0005534696 podman[248861]: 2025-11-25 10:07:51.225783969 +0000 UTC m=+0.036065943 container exec 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 25 05:07:51 np0005534696 podman[248861]: 2025-11-25 10:07:51.307804941 +0000 UTC m=+0.118086914 container exec_died 548c61af73ea2c2d8e1a0aa3927a3ad79d66310bd190d6f7eb2019a499789e96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-mon-compute-2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 25 05:07:51 np0005534696 podman[248970]: 2025-11-25 10:07:51.661197382 +0000 UTC m=+0.037153622 container exec 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 05:07:51 np0005534696 podman[248970]: 2025-11-25 10:07:51.66889554 +0000 UTC m=+0.044851781 container exec_died 8271bd955aed3df4e903b5e454c60f3216df7bd61d6eac4a79634fc9ae303c67 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 05:07:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:51.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:51 np0005534696 podman[249040]: 2025-11-25 10:07:51.866282666 +0000 UTC m=+0.036514477 container exec 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 05:07:51 np0005534696 podman[249040]: 2025-11-25 10:07:51.872854361 +0000 UTC m=+0.043086153 container exec_died 7cd6930c2f3b5430640bed25b224bdf7461d3f2b47ab25d80bde97924cf6f6b5 (image=quay.io/ceph/haproxy:2.3, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-haproxy-rgw-default-compute-2-jrahab)
Nov 25 05:07:52 np0005534696 podman[249092]: 2025-11-25 10:07:52.013836482 +0000 UTC m=+0.035697778 container exec 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, release=1793)
Nov 25 05:07:52 np0005534696 podman[249092]: 2025-11-25 10:07:52.025899648 +0000 UTC m=+0.047760944 container exec_died 74b1242ce4d2a34275d511874526a21980a9f11b9260d48e0518e964c3fd3aa8 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9)
Nov 25 05:07:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:52 np0005534696 podman[249132]: 2025-11-25 10:07:52.138681562 +0000 UTC m=+0.038112161 container exec afc8a7f7775bc1eadf7be781d688d2da8cb2b20920163da5aa215e0ea842a9a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 25 05:07:52 np0005534696 podman[249132]: 2025-11-25 10:07:52.145830195 +0000 UTC m=+0.045260793 container exec_died afc8a7f7775bc1eadf7be781d688d2da8cb2b20920163da5aa215e0ea842a9a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 25 05:07:52 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:07:52 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:07:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:07:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:07:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:07:53 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:07:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:53.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:54 np0005534696 nova_compute[228704]: 2025-11-25 10:07:54.253 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:07:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:54.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:07:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:55 np0005534696 nova_compute[228704]: 2025-11-25 10:07:55.206 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:07:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:55.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:07:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:07:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:07:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:07:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:56 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:07:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:07:56 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:07:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:56 np0005534696 podman[249269]: 2025-11-25 10:07:56.33537591 +0000 UTC m=+0.047562239 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 05:07:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:56.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:57 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:07:57 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:07:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:57.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:07:58.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:07:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:07:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:07:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:07:59 np0005534696 nova_compute[228704]: 2025-11-25 10:07:59.255 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:07:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:07:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:07:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:07:59.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:00 np0005534696 nova_compute[228704]: 2025-11-25 10:08:00.207 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:00.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:01 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:08:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:01.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:08:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:02.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:08:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:03.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:08:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:04 np0005534696 nova_compute[228704]: 2025-11-25 10:08:04.256 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:04.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:05 np0005534696 nova_compute[228704]: 2025-11-25 10:08:05.209 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:08:05.361 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:08:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:08:05.362 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:08:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:08:05.362 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:08:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:05.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:06 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:06.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:07.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:08.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:09 np0005534696 nova_compute[228704]: 2025-11-25 10:08:09.259 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:09.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:10 np0005534696 nova_compute[228704]: 2025-11-25 10:08:10.210 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:10.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:11 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:11.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:12 np0005534696 podman[249353]: 2025-11-25 10:08:12.32614169 +0000 UTC m=+0.037464699 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 05:08:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:12.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:13.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:14 np0005534696 nova_compute[228704]: 2025-11-25 10:08:14.260 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:14.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:15 np0005534696 nova_compute[228704]: 2025-11-25 10:08:15.211 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:08:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:15.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:08:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:16 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:16 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:16 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:16.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:17.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:18 np0005534696 podman[249375]: 2025-11-25 10:08:18.347328427 +0000 UTC m=+0.059378640 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 05:08:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:18.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:19 np0005534696 nova_compute[228704]: 2025-11-25 10:08:19.262 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:19.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:20 np0005534696 nova_compute[228704]: 2025-11-25 10:08:20.213 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:20.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:21.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:22 np0005534696 nova_compute[228704]: 2025-11-25 10:08:22.358 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:22.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:23.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:24 np0005534696 nova_compute[228704]: 2025-11-25 10:08:24.265 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:24.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:25 np0005534696 nova_compute[228704]: 2025-11-25 10:08:25.214 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:25.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:25 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:26 np0005534696 nova_compute[228704]: 2025-11-25 10:08:26.364 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:26 np0005534696 nova_compute[228704]: 2025-11-25 10:08:26.364 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 25 05:08:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:26.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:27 np0005534696 podman[249432]: 2025-11-25 10:08:27.326231062 +0000 UTC m=+0.039578803 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 25 05:08:27 np0005534696 nova_compute[228704]: 2025-11-25 10:08:27.361 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000009s ======
Nov 25 05:08:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:27.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Nov 25 05:08:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:28.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:29 np0005534696 nova_compute[228704]: 2025-11-25 10:08:29.266 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:29.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:30 np0005534696 nova_compute[228704]: 2025-11-25 10:08:30.215 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:30.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:31 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:31 np0005534696 nova_compute[228704]: 2025-11-25 10:08:31.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:31.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:32.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.377 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.377 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.377 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:08:33 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:08:33 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/35916024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.709 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:08:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:33.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.906 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.907 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4858MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.908 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.908 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.986 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:08:33 np0005534696 nova_compute[228704]: 2025-11-25 10:08:33.986 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.000 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:08:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.270 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:34 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:08:34 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/341417532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.337 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.340 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.359 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.360 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.360 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.361 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.361 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 25 05:08:34 np0005534696 nova_compute[228704]: 2025-11-25 10:08:34.384 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 25 05:08:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:34.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:35 np0005534696 nova_compute[228704]: 2025-11-25 10:08:35.216 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:35.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:35 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:35 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:35 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:36 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:36.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:37 np0005534696 nova_compute[228704]: 2025-11-25 10:08:37.384 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:37 np0005534696 nova_compute[228704]: 2025-11-25 10:08:37.385 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:08:37 np0005534696 nova_compute[228704]: 2025-11-25 10:08:37.385 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:08:37 np0005534696 nova_compute[228704]: 2025-11-25 10:08:37.397 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:08:37 np0005534696 nova_compute[228704]: 2025-11-25 10:08:37.397 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:37.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:38.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:39 np0005534696 nova_compute[228704]: 2025-11-25 10:08:39.271 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000010s ======
Nov 25 05:08:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Nov 25 05:08:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:40 np0005534696 nova_compute[228704]: 2025-11-25 10:08:40.216 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:40 np0005534696 nova_compute[228704]: 2025-11-25 10:08:40.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:40 np0005534696 nova_compute[228704]: 2025-11-25 10:08:40.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:08:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:40.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:40 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:41 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:41.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:42.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:43 np0005534696 podman[249510]: 2025-11-25 10:08:43.323092098 +0000 UTC m=+0.036014797 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 05:08:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:43.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:44 np0005534696 nova_compute[228704]: 2025-11-25 10:08:44.273 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:44.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:45 np0005534696 nova_compute[228704]: 2025-11-25 10:08:45.219 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:45.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:45 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:47.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:48 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:48 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:49 np0005534696 nova_compute[228704]: 2025-11-25 10:08:49.275 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:49 np0005534696 podman[249557]: 2025-11-25 10:08:49.342751164 +0000 UTC m=+0.052723616 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 05:08:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:49.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:50 np0005534696 nova_compute[228704]: 2025-11-25 10:08:50.221 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:50 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:50 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:50 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:50.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:51.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:52 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:52 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:52 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:52.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:53.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.950878) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333950903, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2608, "num_deletes": 503, "total_data_size": 6379973, "memory_usage": 6493248, "flush_reason": "Manual Compaction"}
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333960151, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 4012324, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31838, "largest_seqno": 34441, "table_properties": {"data_size": 4002264, "index_size": 5914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 24263, "raw_average_key_size": 20, "raw_value_size": 3980130, "raw_average_value_size": 3300, "num_data_blocks": 254, "num_entries": 1206, "num_filter_entries": 1206, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764065125, "oldest_key_time": 1764065125, "file_creation_time": 1764065333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 9344 microseconds, and 6301 cpu microseconds.
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.960221) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 4012324 bytes OK
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.960262) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.960617) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.960627) EVENT_LOG_v1 {"time_micros": 1764065333960624, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.960657) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 6367610, prev total WAL file size 6367610, number of live WAL files 2.
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961716) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(3918KB)], [63(14MB)]
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333961752, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 18796853, "oldest_snapshot_seqno": -1}
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6333 keys, 12739955 bytes, temperature: kUnknown
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333992824, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 12739955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12699303, "index_size": 23710, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 165417, "raw_average_key_size": 26, "raw_value_size": 12586580, "raw_average_value_size": 1987, "num_data_blocks": 937, "num_entries": 6333, "num_filter_entries": 6333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764063170, "oldest_key_time": 0, "file_creation_time": 1764065333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1e79b6b-6b16-46dd-99f0-cc98f5801a7e", "db_session_id": "IFRO04M9OA18QGXPWOSU", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.992967) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 12739955 bytes
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.993288) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 605.9 rd, 410.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 14.1 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(7.9) write-amplify(3.2) OK, records in: 7351, records dropped: 1018 output_compression: NoCompression
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.993301) EVENT_LOG_v1 {"time_micros": 1764065333993295, "job": 38, "event": "compaction_finished", "compaction_time_micros": 31025, "compaction_time_cpu_micros": 21450, "output_level": 6, "num_output_files": 1, "total_output_size": 12739955, "num_input_records": 7351, "num_output_records": 6333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333993926, "job": 38, "event": "table_file_deletion", "file_number": 65}
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764065333995951, "job": 38, "event": "table_file_deletion", "file_number": 63}
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.961665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:08:53 np0005534696 ceph-mon[75508]: rocksdb: (Original Log Time 2025/11/25-10:08:53.995995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 25 05:08:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:54 np0005534696 nova_compute[228704]: 2025-11-25 10:08:54.277 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:54 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:54 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:54 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:54.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:55 np0005534696 nova_compute[228704]: 2025-11-25 10:08:55.222 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:55.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:08:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:08:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:08:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:08:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:08:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:56 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:56 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:56 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:56.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:57 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:08:57 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:08:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:58 np0005534696 podman[249669]: 2025-11-25 10:08:58.329181225 +0000 UTC m=+0.041998373 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 05:08:58 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:58 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:58 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:08:58.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:08:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:08:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:08:58 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:08:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:08:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:08:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:08:59 np0005534696 nova_compute[228704]: 2025-11-25 10:08:59.281 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:08:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:08:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:08:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:08:59.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:08:59 np0005534696 ceph-mon[75508]: Health check update: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Nov 25 05:09:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:08:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:00 np0005534696 nova_compute[228704]: 2025-11-25 10:09:00.222 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:00 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:00 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:00 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:00.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:09:01 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:09:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:02 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:02 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:02 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:02.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:03.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:04 np0005534696 nova_compute[228704]: 2025-11-25 10:09:04.282 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:04 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:04 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:04 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:04.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:05 np0005534696 nova_compute[228704]: 2025-11-25 10:09:05.223 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:09:05.362 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:09:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:09:05.363 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:09:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:09:05.363 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:09:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:09:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:05.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:09:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:06 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:06 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:06 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:06.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:07.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:07 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:08 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:08 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:08 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:08.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:09 np0005534696 nova_compute[228704]: 2025-11-25 10:09:09.284 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:09.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:10 np0005534696 nova_compute[228704]: 2025-11-25 10:09:10.223 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:10 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:10 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:10 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:10.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:11 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:11 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:11 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:11 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:12 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:12 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:12 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:12.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:13.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:14 np0005534696 nova_compute[228704]: 2025-11-25 10:09:14.286 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:14 np0005534696 podman[249753]: 2025-11-25 10:09:14.32022416 +0000 UTC m=+0.033821683 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 05:09:14 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:14 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:14 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:14.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:15 np0005534696 nova_compute[228704]: 2025-11-25 10:09:15.224 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:09:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:15.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:09:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:16 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:16 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:16 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:16.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:17.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:18 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:18 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:18 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:19 np0005534696 nova_compute[228704]: 2025-11-25 10:09:19.289 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:09:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:19.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:09:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:19 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:20 np0005534696 nova_compute[228704]: 2025-11-25 10:09:20.227 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:20 np0005534696 podman[249776]: 2025-11-25 10:09:20.372940624 +0000 UTC m=+0.079218110 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 05:09:20 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:20 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:20 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:21.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:22 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:22 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:22 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:22.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:23.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:23 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:23 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:23 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:24 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:24 np0005534696 nova_compute[228704]: 2025-11-25 10:09:24.291 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:24 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:24 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:24 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:24.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:25 np0005534696 nova_compute[228704]: 2025-11-25 10:09:25.228 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:25.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:26 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:26 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:09:26 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:26.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:09:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:27.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:28 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:28 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:28 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:28.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:28 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:28 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:28 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:29 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:29 np0005534696 nova_compute[228704]: 2025-11-25 10:09:29.293 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:29 np0005534696 podman[249834]: 2025-11-25 10:09:29.346031036 +0000 UTC m=+0.044605407 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd)
Nov 25 05:09:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:29.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:30 np0005534696 nova_compute[228704]: 2025-11-25 10:09:30.230 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:30 np0005534696 nova_compute[228704]: 2025-11-25 10:09:30.780 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:30 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:30 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.002000023s ======
Nov 25 05:09:30 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:30.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000023s
Nov 25 05:09:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:31.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:32 np0005534696 nova_compute[228704]: 2025-11-25 10:09:32.369 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:32 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:32 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:32 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:32.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:33 np0005534696 nova_compute[228704]: 2025-11-25 10:09:33.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:33 np0005534696 nova_compute[228704]: 2025-11-25 10:09:33.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:33.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:33 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.296 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.355 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.378 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.378 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.378 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.378 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.379 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:09:34 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:09:34 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1967994406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.747 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.970 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.972 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4883MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.972 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:09:34 np0005534696 nova_compute[228704]: 2025-11-25 10:09:34.972 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:09:34 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:34 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:34 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:34.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.082 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.082 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.145 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing inventories for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.199 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating ProviderTree inventory for provider e8eea1e0-1833-4152-af65-8b442fac3e0d from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.199 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Updating inventory in ProviderTree for provider e8eea1e0-1833-4152-af65-8b442fac3e0d with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.212 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing aggregate associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.229 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Refreshing trait associations for resource provider e8eea1e0-1833-4152-af65-8b442fac3e0d, traits: HW_CPU_X86_AVX2,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SVM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE4A,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_RESCUE_BFV,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_FMA3,HW_CPU_X86_SSE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX512VAES,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_ACCELERATORS,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.232 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.248 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.607 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.613 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.629 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.632 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:09:35 np0005534696 nova_compute[228704]: 2025-11-25 10:09:35.632 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:09:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:36 np0005534696 nova_compute[228704]: 2025-11-25 10:09:36.633 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:36 np0005534696 nova_compute[228704]: 2025-11-25 10:09:36.634 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:09:36 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:36 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:36 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:37.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:38 np0005534696 nova_compute[228704]: 2025-11-25 10:09:38.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:38 np0005534696 nova_compute[228704]: 2025-11-25 10:09:38.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:09:38 np0005534696 nova_compute[228704]: 2025-11-25 10:09:38.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:09:38 np0005534696 nova_compute[228704]: 2025-11-25 10:09:38.368 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:09:38 np0005534696 nova_compute[228704]: 2025-11-25 10:09:38.368 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:38 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:38 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:38 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:38 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:38 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:38 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:39 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:39 np0005534696 nova_compute[228704]: 2025-11-25 10:09:39.298 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:40 np0005534696 nova_compute[228704]: 2025-11-25 10:09:40.233 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:40 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:40 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:09:40 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:40.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:09:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:42 np0005534696 nova_compute[228704]: 2025-11-25 10:09:42.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:42 np0005534696 nova_compute[228704]: 2025-11-25 10:09:42.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:09:42 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:42 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:42 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:42.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.002000023s ======
Nov 25 05:09:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:43.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000023s
Nov 25 05:09:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:44 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:44 np0005534696 nova_compute[228704]: 2025-11-25 10:09:44.300 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:44 np0005534696 podman[249934]: 2025-11-25 10:09:44.548249131 +0000 UTC m=+0.042974649 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 05:09:44 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:44 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:44 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:44.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:45 np0005534696 nova_compute[228704]: 2025-11-25 10:09:45.236 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:45.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:46 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:46 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:46 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:46.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:47.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:48 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:48 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:48 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:48 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:09:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:09:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:49 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:49 np0005534696 nova_compute[228704]: 2025-11-25 10:09:49.304 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:09:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:49.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:09:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:50 np0005534696 nova_compute[228704]: 2025-11-25 10:09:50.239 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:51.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:51 np0005534696 podman[249958]: 2025-11-25 10:09:51.379318172 +0000 UTC m=+0.076199446 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, config_id=ovn_controller)
Nov 25 05:09:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:51.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:53.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:53.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:53 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:54 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:54 np0005534696 nova_compute[228704]: 2025-11-25 10:09:54.306 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:09:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:09:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:55 np0005534696 nova_compute[228704]: 2025-11-25 10:09:55.241 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:55.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:09:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:09:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:09:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:09:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:09:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:09:59 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:09:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:09:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:09:59.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:09:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:09:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:09:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:09:59 np0005534696 nova_compute[228704]: 2025-11-25 10:09:59.308 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:09:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:09:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:09:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:09:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:00 np0005534696 nova_compute[228704]: 2025-11-25 10:10:00.243 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:00 np0005534696 podman[249990]: 2025-11-25 10:10:00.339351422 +0000 UTC m=+0.049609817 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 05:10:00 np0005534696 ceph-mon[75508]: Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Nov 25 05:10:00 np0005534696 ceph-mon[75508]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Nov 25 05:10:00 np0005534696 ceph-mon[75508]:    daemon nfs.cephfs.2.0.compute-0.rychik on compute-0 is in error state
Nov 25 05:10:00 np0005534696 ceph-mon[75508]:    daemon nfs.cephfs.0.0.compute-1.yfzsxe on compute-1 is in error state
Nov 25 05:10:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:01.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:10:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:01.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:10:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:10:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:10:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:10:02 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:10:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:10:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:03.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:10:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:03 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:04 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:04 np0005534696 nova_compute[228704]: 2025-11-25 10:10:04.312 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:05.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:05 np0005534696 nova_compute[228704]: 2025-11-25 10:10:05.245 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:10:05.363 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:10:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:10:05.364 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:10:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:10:05.364 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:10:05 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:10:05 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:10:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:05.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:10:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:07.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:10:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:10:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:07.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:10:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:09 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:09.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:09 np0005534696 nova_compute[228704]: 2025-11-25 10:10:09.315 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:09.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:10 np0005534696 nova_compute[228704]: 2025-11-25 10:10:10.246 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:10:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:11.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:10:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:11.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:10:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:13.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:10:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:13 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:14 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:14 np0005534696 nova_compute[228704]: 2025-11-25 10:10:14.317 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:15.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:15 np0005534696 nova_compute[228704]: 2025-11-25 10:10:15.247 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:15 np0005534696 podman[250151]: 2025-11-25 10:10:15.352562967 +0000 UTC m=+0.053993966 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 05:10:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:15.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:10:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:17.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:10:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:18 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:18 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:18 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:18 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:19.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:19 np0005534696 nova_compute[228704]: 2025-11-25 10:10:19.320 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:19.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:20 np0005534696 nova_compute[228704]: 2025-11-25 10:10:20.249 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:21.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:10:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:21.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:10:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:22 np0005534696 podman[250174]: 2025-11-25 10:10:22.381498179 +0000 UTC m=+0.088899335 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 05:10:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:22 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:22 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:22 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:22 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:23.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:10:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:23.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:10:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:24 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:24 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:24 np0005534696 nova_compute[228704]: 2025-11-25 10:10:24.322 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:25.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:25 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:25 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:25 np0005534696 nova_compute[228704]: 2025-11-25 10:10:25.250 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:25 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 05:10:25 np0005534696 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 05:10:25 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:25 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:25 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:25.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:25 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:26 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:26 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:26 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:27 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:27 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:27 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:27 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:27 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:27.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:28 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:28 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:28 np0005534696 nova_compute[228704]: 2025-11-25 10:10:28.352 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:29 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:29 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:29 np0005534696 nova_compute[228704]: 2025-11-25 10:10:29.324 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:29 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:29 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:29 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:29.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:30 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:30 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:30 np0005534696 nova_compute[228704]: 2025-11-25 10:10:30.252 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:30 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:30 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:31.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:31 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:31 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:31 np0005534696 podman[250232]: 2025-11-25 10:10:31.344380943 +0000 UTC m=+0.050133775 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 05:10:31 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:31 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:31 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:31.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:32 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:32 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:32 np0005534696 nova_compute[228704]: 2025-11-25 10:10:32.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:33.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:33 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:33 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:33 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:33 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:10:33 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:10:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:34 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:34 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:34 np0005534696 nova_compute[228704]: 2025-11-25 10:10:34.327 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:34 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:10:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:35.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:10:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:35 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:35 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:35 np0005534696 nova_compute[228704]: 2025-11-25 10:10:35.254 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:35 np0005534696 nova_compute[228704]: 2025-11-25 10:10:35.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:35 np0005534696 nova_compute[228704]: 2025-11-25 10:10:35.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:35 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:35 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:35 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:35.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:35 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:36 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:36 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.357 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.357 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.358 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.375 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.376 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.376 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.376 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.376 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:10:36 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:10:36 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2033946624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.754 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.995 228708 WARNING nova.virt.libvirt.driver [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.997 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4881MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": 
"0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.998 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:10:36 np0005534696 nova_compute[228704]: 2025-11-25 10:10:36.998 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:10:37 np0005534696 nova_compute[228704]: 2025-11-25 10:10:37.049 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 25 05:10:37 np0005534696 nova_compute[228704]: 2025-11-25 10:10:37.049 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7681MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 25 05:10:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:10:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:37.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:10:37 np0005534696 nova_compute[228704]: 2025-11-25 10:10:37.063 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 25 05:10:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:37 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:37 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:37 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 25 05:10:37 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1927464700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 25 05:10:37 np0005534696 nova_compute[228704]: 2025-11-25 10:10:37.425 228708 DEBUG oslo_concurrency.processutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 25 05:10:37 np0005534696 nova_compute[228704]: 2025-11-25 10:10:37.429 228708 DEBUG nova.compute.provider_tree [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed in ProviderTree for provider: e8eea1e0-1833-4152-af65-8b442fac3e0d update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 25 05:10:37 np0005534696 nova_compute[228704]: 2025-11-25 10:10:37.440 228708 DEBUG nova.scheduler.client.report [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Inventory has not changed for provider e8eea1e0-1833-4152-af65-8b442fac3e0d based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7681, 'reserved': 512, 'min_unit': 1, 'max_unit': 7681, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 25 05:10:37 np0005534696 nova_compute[228704]: 2025-11-25 10:10:37.441 228708 DEBUG nova.compute.resource_tracker [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 25 05:10:37 np0005534696 nova_compute[228704]: 2025-11-25 10:10:37.441 228708 DEBUG oslo_concurrency.lockutils [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:10:37 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:37 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:37 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:37.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:38 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:38 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:38 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:38 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:38 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:38 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:39.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:39 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:39 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:39 np0005534696 nova_compute[228704]: 2025-11-25 10:10:39.328 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:39 np0005534696 nova_compute[228704]: 2025-11-25 10:10:39.441 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:39 np0005534696 nova_compute[228704]: 2025-11-25 10:10:39.441 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 25 05:10:39 np0005534696 nova_compute[228704]: 2025-11-25 10:10:39.441 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 25 05:10:39 np0005534696 nova_compute[228704]: 2025-11-25 10:10:39.452 228708 DEBUG nova.compute.manager [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 25 05:10:39 np0005534696 nova_compute[228704]: 2025-11-25 10:10:39.452 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:39 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:39 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:39 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:39.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:40 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:40 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:40 np0005534696 nova_compute[228704]: 2025-11-25 10:10:40.255 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:40 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:41 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:41 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:41 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:41 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:41 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:41.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:42 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:42 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:42 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:42 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:42 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:42 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:43 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:43 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:43 np0005534696 nova_compute[228704]: 2025-11-25 10:10:43.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:43 np0005534696 nova_compute[228704]: 2025-11-25 10:10:43.356 228708 DEBUG oslo_service.periodic_task [None req-0deb1b37-94fd-4c50-80cc-f33df7deea9f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 25 05:10:43 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:43 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:43 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:43.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:44 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:44 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:44 np0005534696 nova_compute[228704]: 2025-11-25 10:10:44.330 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:45.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:45 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:45 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:45 np0005534696 nova_compute[228704]: 2025-11-25 10:10:45.256 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:45 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:45 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:45 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:45.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:45 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:46 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:46 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:46 np0005534696 podman[250333]: 2025-11-25 10:10:46.325414562 +0000 UTC m=+0.033302245 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 25 05:10:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:46 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:10:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:10:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:47 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:47 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:47 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:47 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:47 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:47.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:48 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:48 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:10:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:10:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:49 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:49 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:49 np0005534696 nova_compute[228704]: 2025-11-25 10:10:49.333 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:49 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:49 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:49 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:50 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:50 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:50 np0005534696 nova_compute[228704]: 2025-11-25 10:10:50.257 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:50 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:50 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:51 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:51 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:51 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:51 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:51 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:51 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:52 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:52 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:53 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:53 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:53 np0005534696 podman[250356]: 2025-11-25 10:10:53.350236625 +0000 UTC m=+0.062946176 container health_status dbe8bebdaef8769bc4bedbbb9852289de7f8834e3e76ba51c5426140d3be44a8 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:af46761060c7987e1dee5f14c06d85b46f12ad8e09c83d4246ab4e3a65dfda3e', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 05:10:53 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:53 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:53 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:53.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:54 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:54 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:54 np0005534696 nova_compute[228704]: 2025-11-25 10:10:54.335 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:55 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:55 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:55.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:55 np0005534696 nova_compute[228704]: 2025-11-25 10:10:55.259 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:55 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:55 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:55 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:55.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:55 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:10:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:10:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:10:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:55 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:10:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:10:56 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:10:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:56 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:56 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:57 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:57 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:57.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:57 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:57 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:57 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:57.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:58 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:58 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:10:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:59 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:10:59 2025: (VI_0) received an invalid passwd!
Nov 25 05:10:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:10:59.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:10:59 np0005534696 nova_compute[228704]: 2025-11-25 10:10:59.338 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:10:59 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:10:59 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:10:59 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:10:59.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:00 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:00 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:00 np0005534696 nova_compute[228704]: 2025-11-25 10:11:00.259 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:00 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:11:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:11:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:11:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:00 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:11:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:01 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:11:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:01 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:01 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:01.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:01 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:01 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:01 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:01.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:01 np0005534696 systemd-logind[744]: New session 57 of user zuul.
Nov 25 05:11:01 np0005534696 systemd[1]: Started Session 57 of User zuul.
Nov 25 05:11:02 np0005534696 podman[250391]: 2025-11-25 10:11:02.011326916 +0000 UTC m=+0.066487525 container health_status 0ef242e8ddb34cae11c62ea59d54a62783bf80eb1c2c5f7e03cd1e90f2fdc99d (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6f86db36d668348be8c5b46dcda8b1fa23d34bfdc07164fbcbe7a6327fb4de24', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 05:11:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:02 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:02 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:03 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:03 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:03.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:03 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:03 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:03 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:03.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:04 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:04 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:04 np0005534696 nova_compute[228704]: 2025-11-25 10:11:04.340 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:04 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Nov 25 05:11:04 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1775628257' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 25 05:11:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:05 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:05 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:05 np0005534696 nova_compute[228704]: 2025-11-25 10:11:05.260 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:11:05.364 142676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 25 05:11:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:11:05.369 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 25 05:11:05 np0005534696 ovn_metadata_agent[142671]: 2025-11-25 10:11:05.370 142676 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 25 05:11:05 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:05 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:05 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:05.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:05 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:11:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:11:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:11:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:05 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:11:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:06 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:11:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:06 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:06 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:07 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:07 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:07.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:07 np0005534696 ovs-vsctl[250810]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 05:11:07 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:07 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:07 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:07.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:07 np0005534696 virtqemud[228342]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 05:11:07 np0005534696 virtqemud[228342]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 05:11:08 np0005534696 virtqemud[228342]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 05:11:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:08 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:08 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:08 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: cache status {prefix=cache status} (starting...)
Nov 25 05:11:08 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: client ls {prefix=client ls} (starting...)
Nov 25 05:11:08 np0005534696 lvm[251109]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 25 05:11:08 np0005534696 lvm[251109]: VG ceph_vg0 finished
Nov 25 05:11:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:09 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:09 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:09.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:09 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:11:09 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:11:09 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 25 05:11:09 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:11:09 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:11:09 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 25 05:11:09 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: damage ls {prefix=damage ls} (starting...)
Nov 25 05:11:09 np0005534696 nova_compute[228704]: 2025-11-25 10:11:09.342 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:09 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump loads {prefix=dump loads} (starting...)
Nov 25 05:11:09 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 25 05:11:09 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 25 05:11:09 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 25 05:11:09 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2787533441' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 25 05:11:09 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 25 05:11:09 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 25 05:11:09 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:09 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:09 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:09.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:09 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 25 05:11:10 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 25 05:11:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:10 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:10 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:10 np0005534696 nova_compute[228704]: 2025-11-25 10:11:10.261 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Nov 25 05:11:10 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2543856292' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 25 05:11:10 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: ops {prefix=ops} (starting...)
Nov 25 05:11:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Nov 25 05:11:10 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3634701221' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 25 05:11:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 05:11:10 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/62713454' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 05:11:10 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: session ls {prefix=session ls} (starting...)
Nov 25 05:11:10 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:11:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:11:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:11:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:10 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:11:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:11 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:11:11 np0005534696 ceph-mds[84744]: mds.cephfs.compute-2.pwazzx asok_command: status {prefix=status} (starting...)
Nov 25 05:11:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:11 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:11 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000012s ======
Nov 25 05:11:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:11.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Nov 25 05:11:11 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 05:11:11 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2725417152' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 05:11:11 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 05:11:11 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4244050022' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 05:11:11 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Nov 25 05:11:11 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1401436123' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 25 05:11:11 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Nov 25 05:11:11 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/814928782' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 25 05:11:11 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:11 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:11 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:11.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:12 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:12 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:12 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 05:11:12 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694499281' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 05:11:12 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 05:11:12 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3866070346' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 05:11:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:13 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:13 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:13.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:13 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Nov 25 05:11:13 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2477092599' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 25 05:11:13 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Nov 25 05:11:13 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056844208' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 25 05:11:13 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:13 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:13 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:13.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:14 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:14 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:14 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Nov 25 05:11:14 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3373365784' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 25 05:11:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:11:14 np0005534696 ceph-mon[75508]: from='mgr.14661 192.168.122.100:0/3894637691' entity='mgr.compute-0.zcfgby' 
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725c00 session 0x561f934783c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871795 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 983040 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 983040 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.601573944s of 38.603164673s, submitted: 1
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871927 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72482816 unmapped: 876544 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 860160 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 873455 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874967 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.004404068s of 12.014169693s, submitted: 11
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874360 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 704512 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 655360 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 655360 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91647400 session 0x561f9208c780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 630784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 630784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874228 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.241825104s of 32.244922638s, submitted: 2
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 606208 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874360 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 581632 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 548864 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 548864 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877400 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 499712 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 499712 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 483328 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876641 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.004043579s of 12.013555527s, submitted: 12
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 475136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 475136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 475136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 450560 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 450560 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 450560 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 442368 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 417792 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 417792 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 368640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724800 session 0x561f916d1680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 352256 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 335872 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 335872 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876070 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 327680 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 327680 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.972728729s of 36.975532532s, submitted: 2
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876202 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877730 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 278528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 262144 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 262144 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 253952 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.007935524s of 12.018008232s, submitted: 11
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876532 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 196608 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725400 session 0x561f9347a960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876400 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 33.416950226s of 33.419124603s, submitted: 2
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876532 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 163840 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73195520 unmapped: 163840 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 139264 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 878060 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 139264 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 131072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877301 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.548579216s of 14.557904243s, submitted: 12
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 81920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f90b34000 session 0x561f931d5680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 16384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 16384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1040384 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1040384 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73367552 unmapped: 1040384 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877321 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.721147537s of 24.722063065s, submitted: 1
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877453 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 2064384 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 2056192 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 2056192 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 2048000 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 2039808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880493 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 2039808 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 2023424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724c00 session 0x561f94066f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 2023424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.950899124s of 10.961990356s, submitted: 12
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 2023424 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 2007040 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879754 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 2007040 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 1998848 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 1998848 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879754 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 1982464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 1982464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1966080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1966080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 879886 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 1966080 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.868379593s of 12.871901512s, submitted: 3
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 1990656 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 1982464 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881414 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 917504 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 909312 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74555392 unmapped: 901120 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 892928 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 892928 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880655 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 884736 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 884736 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 876544 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.448137283s of 12.458023071s, submitted: 11
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 860160 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 860160 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 851968 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 843776 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 835584 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 827392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 827392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 827392 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 819200 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 802816 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 794624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 794624 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 786432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 786432 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 778240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 778240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 778240 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 770048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 770048 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 761856 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 753664 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 745472 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 745472 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 737280 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5911 writes, 25K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5911 writes, 1013 syncs, 5.84 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5911 writes, 25K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 19.14 MB, 0.03 MB/s
Interval WAL: 5911 writes, 1013 syncs, 5.84 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 679936 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 663552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 663552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 663552 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 655360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 655360 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 647168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 647168 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 638976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 638976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 638976 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 630784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 630784 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 622592 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 614400 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 614400 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 598016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 598016 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 589824 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 581632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 52.580356598s of 52.581237793s, submitted: 1
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 581632 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 1884160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 1867776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 1859584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 1835008 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 1826816 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 1818624 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 1818624 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 1802240 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 1802240 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 1794048 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 1785856 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 1785856 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 1777664 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 1769472 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1761280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1761280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1761280 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1753088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1753088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1753088 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 1744896 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1728512 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1728512 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1728512 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 1720320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 1720320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 1720320 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 1695744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 1695744 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 1687552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 1687552 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1679360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1679360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 1679360 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1671168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 1671168 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1662976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 1662976 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 1654784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 1654784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 77996032 unmapped: 1654784 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1646592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1646592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78004224 unmapped: 1646592 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 1638400 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1630208 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1622016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1622016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 1622016 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1613824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1613824 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1605632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 1605632 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1597440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1597440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 1597440 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1572864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1572864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1572864 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1564672 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1564672 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1548288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 1548288 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1540096 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1531904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1531904 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1523712 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1515520 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1507328 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1507328 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 1507328 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 1499136 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91647400 session 0x561f9208c960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 1490944 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1482752 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 1482752 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1474560 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880084 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 1466368 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 1466368 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 98.924819946s of 99.062004089s, submitted: 246
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 880216 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 1441792 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881744 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881744 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.009870529s of 12.023555756s, submitted: 10
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881137 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 1425408 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 1417216 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1409024 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1400832 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725c00 session 0x561f94067680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881005 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 1392640 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 77.978752136s of 77.981117249s, submitted: 2
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881137 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1376256 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 1351680 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884177 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1343488 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883418 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725400 session 0x561f93724f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 1327104 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.983892441s of 14.993802071s, submitted: 12
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883438 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883438 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78331904 unmapped: 1318912 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 1302528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885098 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 1302528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78348288 unmapped: 1302528 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.811387062s of 12.817705154s, submitted: 7
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1286144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1286144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78364672 unmapped: 1286144 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886610 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1269760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1269760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1269760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78381056 unmapped: 1269760 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1261568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885412 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f93ee1c00 session 0x561f92574780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1261568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1261568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 1261568 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885280 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885280 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.171207428s of 18.178813934s, submitted: 9
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885428 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78397440 unmapped: 1253376 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1236992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1236992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78413824 unmapped: 1236992 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 1228800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884078 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 1228800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 1228800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78422016 unmapped: 1228800 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.994335175s of 14.004615784s, submitted: 11
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884098 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91647400 session 0x561f9485ad20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884098 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884098 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.246137619s of 14.247477531s, submitted: 1
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 884230 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78430208 unmapped: 1220608 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 1204224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 1204224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78446592 unmapped: 1204224 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 1196032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885758 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78454784 unmapped: 1196032 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78462976 unmapped: 1187840 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78471168 unmapped: 1179648 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885758 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78479360 unmapped: 1171456 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1155072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1155072 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885758 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.728876114s of 16.739055634s, submitted: 10
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1138688 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1122304 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724800 session 0x561f937670e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1114112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78536704 unmapped: 1114112 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1097728 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885610 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.609428406s of 35.610633850s, submitted: 1
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1089536 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 887270 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1040384 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 887270 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.006770134s of 12.016820908s, submitted: 10
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886663 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1032192 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 1015808 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 991232 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 974848 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 966656 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91724c00 session 0x561f94839c20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f90b34000 session 0x561f94066780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 950272 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 942080 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 933888 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 886531 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.620494843s of 57.623191833s, submitted: 2
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 917504 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78733312 unmapped: 917504 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78741504 unmapped: 909312 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 888323 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91725c00 session 0x561f94878d20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 876544 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889835 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.819118500s of 10.832632065s, submitted: 12
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889228 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 851968 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889112 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78749696 unmapped: 901120 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.820836067s of 11.834323883s, submitted: 10
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 884736 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892136 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890938 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 868352 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 ms_handle_reset con 0x561f91647400 session 0x561f934812c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890806 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78790656 unmapped: 860160 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 36.885120392s of 36.895862579s, submitted: 8
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890938 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78807040 unmapped: 843776 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 835584 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890954 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 827392 heap: 79650816 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890195 data_alloc: 218103808 data_used: 49152
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79912960 unmapped: 786432 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.011589050s of 12.020524979s, submitted: 11
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889756 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889624 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78864384 unmapped: 1835008 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.008609772s of 35.010581970s, submitted: 2
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 1826816 heap: 80699392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 137 ms_handle_reset con 0x561f91724c00 session 0x561f9485be00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fca7d000/0x0/0x4ffc00000, data 0xf7246/0x19f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 88260608 unmapped: 1753088 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 139 ms_handle_reset con 0x561f93ee1c00 session 0x561f943dd2c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 78995456 unmapped: 11018240 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 140 ms_handle_reset con 0x561f93ee0c00 session 0x561f948a14a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 10928128 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939800 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79085568 unmapped: 10928128 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fc5fd000/0x0/0x4ffc00000, data 0x56f6c8/0x61e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5f9000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943642 data_alloc: 218103808 data_used: 57344
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f90b34000 session 0x561f948781e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.949914932s of 10.983639717s, submitted: 69
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 10911744 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5f9000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 10911744 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 10911744 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942170 data_alloc: 218103808 data_used: 57344
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942186 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.442098618s of 10.445603371s, submitted: 4
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942318 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f91724800 session 0x561f948a52c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79060992 unmapped: 10952704 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942018 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942170 data_alloc: 218103808 data_used: 57344
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.351478577s of 14.358352661s, submitted: 8
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 10936320 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f93ee1c00 session 0x561f943dde00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f93463800 session 0x561f943dd2c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f91727800 session 0x561f943dc780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 79093760 unmapped: 10919936 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942186 data_alloc: 218103808 data_used: 53248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f90b34000 session 0x561f943dd4a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f91724800 session 0x561f916d12c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 6225920 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x57169a/0x621000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 ms_handle_reset con 0x561f91727800 session 0x561f944245a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 6225920 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 6225920 heap: 90013696 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f93463800 session 0x561f94424780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f93ee1c00 session 0x561f948a1860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f90b34000 session 0x561f94425e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f91724800 session 0x561f943dcb40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f91727800 session 0x561f94424f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 17776640 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f93463800 session 0x561f93472d20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 17776640 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054382 data_alloc: 218103808 data_used: 4714496
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbb15000/0x0/0x4ffc00000, data 0x1052938/0x1106000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f93ee1c00 session 0x561f91fb7860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 17768448 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f90b34000 session 0x561f94838780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 ms_handle_reset con 0x561f91724800 session 0x561f948a14a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 17702912 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 95543296 unmapped: 8183808 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 7258112 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.981223106s of 12.044069290s, submitted: 85
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 7258112 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132380 data_alloc: 234881024 data_used: 15904768
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbaf1000/0x0/0x4ffc00000, data 0x1076948/0x112b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96452608 unmapped: 7274496 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135846 data_alloc: 234881024 data_used: 15904768
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbaed000/0x0/0x4ffc00000, data 0x107891a/0x112e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96468992 unmapped: 7258112 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 96477184 unmapped: 7249920 heap: 103727104 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104882176 unmapped: 3047424 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbaee000/0x0/0x4ffc00000, data 0x107891a/0x112e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9956000/0x0/0x4ffc00000, data 0x207091a/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103817216 unmapped: 4112384 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9956000/0x0/0x4ffc00000, data 0x207091a/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 4079616 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262578 data_alloc: 234881024 data_used: 16596992
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 4079616 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 4079616 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103858176 unmapped: 4071424 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9956000/0x0/0x4ffc00000, data 0x207091a/0x2126000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 103899136 unmapped: 4030464 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.577157021s of 14.672077179s, submitted: 163
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 3784704 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261510 data_alloc: 234881024 data_used: 16601088
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 3784704 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 3784704 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x209091a/0x2146000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 3751936 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104177664 unmapped: 3751936 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x209091a/0x2146000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104210432 unmapped: 3719168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261510 data_alloc: 234881024 data_used: 16601088
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9936000/0x0/0x4ffc00000, data 0x209091a/0x2146000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261738 data_alloc: 234881024 data_used: 16601088
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9926000/0x0/0x4ffc00000, data 0x20a091a/0x2156000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104415232 unmapped: 3514368 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.475128174s of 12.481987953s, submitted: 7
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91975c00 session 0x561f944243c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f94401400 session 0x561f94067680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104439808 unmapped: 3489792 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f94401c00 session 0x561f920c2000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f94838000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f91630f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f91630b40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f91631860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282090 data_alloc: 234881024 data_used: 16601088
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924ccc00 session 0x561f91f5d0e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x223997c/0x22f0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f93479a40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f93479e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f934783c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104275968 unmapped: 4702208 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x223999f/0x22f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 4308992 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 4308992 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286323 data_alloc: 234881024 data_used: 17137664
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x223999f/0x22f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 4308992 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 7310 writes, 28K keys, 7310 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7310 writes, 1661 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1399 writes, 3452 keys, 1399 commit groups, 1.0 writes per commit group, ingest: 2.70 MB, 0.00 MB/s#012Interval WAL: 1399 writes, 648 syncs, 2.16 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561f8fd4b350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memta
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x223999f/0x22f1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.559023857s of 12.601721764s, submitted: 51
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286947 data_alloc: 234881024 data_used: 17141760
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9788000/0x0/0x4ffc00000, data 0x223c99f/0x22f4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104677376 unmapped: 4300800 heap: 108978176 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106471424 unmapped: 3563520 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 3145728 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1331257 data_alloc: 234881024 data_used: 17174528
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92c9000/0x0/0x4ffc00000, data 0x26fb99f/0x27b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 3145728 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106889216 unmapped: 3145728 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92c6000/0x0/0x4ffc00000, data 0x26fe99f/0x27b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330449 data_alloc: 234881024 data_used: 17178624
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92c6000/0x0/0x4ffc00000, data 0x26fe99f/0x27b6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106971136 unmapped: 3063808 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 3055616 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.937485695s of 14.987780571s, submitted: 64
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f943e23c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106979328 unmapped: 3055616 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc800 session 0x561f920c2000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1273125 data_alloc: 234881024 data_used: 16601088
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9508000/0x0/0x4ffc00000, data 0x20ac91a/0x2162000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 106676224 unmapped: 3358720 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f948821e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93463800 session 0x561f94882960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f91630d20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 98107392 unmapped: 11927552 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992438 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97787904 unmapped: 12247040 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992438 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992438 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f948385a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97796096 unmapped: 12238848 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992438 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fad8d000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.600938797s of 25.782604218s, submitted: 338
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f925725a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f9346c1e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97804288 unmapped: 12230656 heap: 110034944 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992146 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f93482960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f93724960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 27222016 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d90000/0x0/0x4ffc00000, data 0x18288fa/0x18dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d90000/0x0/0x4ffc00000, data 0x18288fa/0x18dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97648640 unmapped: 27222016 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97656832 unmapped: 27213824 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 27205632 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97665024 unmapped: 27205632 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1132945 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97198080 unmapped: 27672576 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93463800 session 0x561f948a1e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 97361920 unmapped: 27508736 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110190592 unmapped: 14680064 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 14614528 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 14614528 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1268206 data_alloc: 234881024 data_used: 24109056
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 14614528 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.134346008s of 12.186895370s, submitted: 60
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d6b000/0x0/0x4ffc00000, data 0x184c91d/0x1901000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267551 data_alloc: 234881024 data_used: 24117248
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 14548992 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119627776 unmapped: 5242880 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119857152 unmapped: 5013504 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f956e000/0x0/0x4ffc00000, data 0x204991d/0x20fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355601 data_alloc: 234881024 data_used: 25296896
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 4972544 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f956e000/0x0/0x4ffc00000, data 0x204991d/0x20fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.498706818s of 11.555447578s, submitted: 107
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349473 data_alloc: 234881024 data_used: 25309184
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91725c00 session 0x561f93478d20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 6234112 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f954d000/0x0/0x4ffc00000, data 0x206a91d/0x211f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349721 data_alloc: 234881024 data_used: 25309184
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9547000/0x0/0x4ffc00000, data 0x207091d/0x2125000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.617301941s of 11.622908592s, submitted: 4
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349809 data_alloc: 234881024 data_used: 25309184
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 6201344 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 6160384 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 6160384 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118710272 unmapped: 6160384 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9544000/0x0/0x4ffc00000, data 0x207391d/0x2128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f948383c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f948a12c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f943e34a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104890368 unmapped: 19980288 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010535 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104890368 unmapped: 19980288 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 20709376 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 20709376 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb01c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104161280 unmapped: 20709376 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104194048 unmapped: 20676608 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011479 data_alloc: 218103808 data_used: 4718592
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104194048 unmapped: 20676608 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104194048 unmapped: 20676608 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.913671494s of 12.943838120s, submitted: 48
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010720 data_alloc: 218103808 data_used: 4718592
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010740 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104202240 unmapped: 20668416 heap: 124870656 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f931d7680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93463800 session 0x561f936c81e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f931d5e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f948a0780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.803101540s of 10.806558609s, submitted: 3
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f94066f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f943e2780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee1c00 session 0x561f943e21e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f937661e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f93767860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137748 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9f50000/0x0/0x4ffc00000, data 0x16688a8/0x171c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9f50000/0x0/0x4ffc00000, data 0x16688a8/0x171c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f93767c20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f940665a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104718336 unmapped: 33325056 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee1000 session 0x561f940661e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f940663c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 33996800 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138722 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 104062976 unmapped: 33980416 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9f4f000/0x0/0x4ffc00000, data 0x16688b8/0x171d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257282 data_alloc: 234881024 data_used: 22204416
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111534080 unmapped: 26509312 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9f4f000/0x0/0x4ffc00000, data 0x16688b8/0x171d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 26476544 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111566848 unmapped: 26476544 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.129447937s of 16.155773163s, submitted: 23
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 123609088 unmapped: 14434304 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383618 data_alloc: 234881024 data_used: 22441984
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 15581184 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 15581184 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122462208 unmapped: 15581184 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d7f000/0x0/0x4ffc00000, data 0x28388b8/0x28ed000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 15548416 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122494976 unmapped: 15548416 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393134 data_alloc: 234881024 data_used: 22614016
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 16302080 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d5b000/0x0/0x4ffc00000, data 0x285c8b8/0x2911000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 16302080 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121741312 unmapped: 16302080 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121749504 unmapped: 16293888 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f948a43c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121749504 unmapped: 16293888 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390878 data_alloc: 234881024 data_used: 22614016
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d5b000/0x0/0x4ffc00000, data 0x285c8b8/0x2911000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121749504 unmapped: 16293888 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d5b000/0x0/0x4ffc00000, data 0x285c8b8/0x2911000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.868580818s of 11.958714485s, submitted: 169
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 16171008 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 16171008 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121872384 unmapped: 16171008 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d50000/0x0/0x4ffc00000, data 0x28678b8/0x291c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 16138240 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391046 data_alloc: 234881024 data_used: 22614016
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121905152 unmapped: 16138240 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121913344 unmapped: 16130048 heap: 138043392 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f91f5d2c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d50000/0x0/0x4ffc00000, data 0x28678b8/0x291c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 21225472 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8269000/0x0/0x4ffc00000, data 0x334e8b8/0x3403000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120504320 unmapped: 21217280 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8269000/0x0/0x4ffc00000, data 0x334e8b8/0x3403000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 21348352 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470344 data_alloc: 234881024 data_used: 22614016
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 21348352 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 21348352 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.052414894s of 11.076602936s, submitted: 19
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120373248 unmapped: 21348352 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f934d7000 session 0x561f91625a40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e000 session 0x561f93478d20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 21340160 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f943dd860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f916d14a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 21020672 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476050 data_alloc: 234881024 data_used: 22609920
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8265000/0x0/0x4ffc00000, data 0x33518c8/0x3407000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120061952 unmapped: 21659648 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 11460608 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130260992 unmapped: 11460608 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 11436032 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 11436032 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1552051 data_alloc: 251658240 data_used: 34033664
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130285568 unmapped: 11436032 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8241000/0x0/0x4ffc00000, data 0x33758c8/0x342b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 11411456 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 11411456 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f934d6400 session 0x561f91f5dc20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.968343735s of 10.983410835s, submitted: 19
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 11411456 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f823e000/0x0/0x4ffc00000, data 0x33768c8/0x342c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [1])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130342912 unmapped: 11378688 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554567 data_alloc: 251658240 data_used: 34070528
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132513792 unmapped: 9207808 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x376d8c8/0x3823000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e49000/0x0/0x4ffc00000, data 0x376d8c8/0x3823000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e45000/0x0/0x4ffc00000, data 0x37718c8/0x3827000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1592305 data_alloc: 251658240 data_used: 34983936
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e45000/0x0/0x4ffc00000, data 0x37718c8/0x3827000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f7e45000/0x0/0x4ffc00000, data 0x37718c8/0x3827000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.034820557s of 10.063674927s, submitted: 38
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 132153344 unmapped: 9568256 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f92574780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f934d7000 session 0x561f925752c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f94882b40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 16998400 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400896 data_alloc: 234881024 data_used: 22614016
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8d4c000/0x0/0x4ffc00000, data 0x286b8b8/0x2920000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 16998400 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 16998400 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f94067e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f943dc5a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f943dd860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 30433280 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111271936 unmapped: 30449664 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 31473664 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037757 data_alloc: 218103808 data_used: 4718592
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 31473664 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 31473664 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037150 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.915751457s of 15.942548752s, submitted: 41
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037018 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037018 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037018 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fb042000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110280704 unmapped: 31440896 heap: 141721600 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.887767792s of 11.889199257s, submitted: 1
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f934d6400 session 0x561f916d0960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 40525824 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 40525824 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 40525824 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa6a0000/0x0/0x4ffc00000, data 0xf19898/0xfcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f948a14a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 40493056 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109235 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 40501248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176551 data_alloc: 234881024 data_used: 14622720
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa6a0000/0x0/0x4ffc00000, data 0xf19898/0xfcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 39526400 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176551 data_alloc: 234881024 data_used: 14622720
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.633050919s of 13.654164314s, submitted: 19
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa6a0000/0x0/0x4ffc00000, data 0xf19898/0xfcc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d8a000/0x0/0x4ffc00000, data 0x182f898/0x18e2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256699 data_alloc: 234881024 data_used: 15204352
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cfa000/0x0/0x4ffc00000, data 0x18b7898/0x196a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120012800 unmapped: 31211520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252531 data_alloc: 234881024 data_used: 15204352
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9ce3000/0x0/0x4ffc00000, data 0x18d6898/0x1989000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.823400497s of 12.888879776s, submitted: 92
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253139 data_alloc: 234881024 data_used: 15212544
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9cd6000/0x0/0x4ffc00000, data 0x18e3898/0x1996000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 31195136 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 31080448 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 31080448 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253771 data_alloc: 234881024 data_used: 15212544
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f000 session 0x561f931d7680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f91f5d860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f9485bc20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530fc00 session 0x561f91624b40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f943ddc20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f000 session 0x561f936c90e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f935a6000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f93724000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f93725c20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f963b000/0x0/0x4ffc00000, data 0x1f7d8fa/0x2031000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 30441472 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120782848 unmapped: 30441472 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f963b000/0x0/0x4ffc00000, data 0x1f7d8fa/0x2031000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120856576 unmapped: 30367744 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f963b000/0x0/0x4ffc00000, data 0x1f7d8fa/0x2031000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f91fb7a40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 30334976 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 30334976 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315215 data_alloc: 234881024 data_used: 15212544
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.798554420s of 12.838501930s, submitted: 53
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f92573680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 30588928 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 120217600 unmapped: 31006720 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 28614656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9616000/0x0/0x4ffc00000, data 0x1fa191d/0x2056000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 28614656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9616000/0x0/0x4ffc00000, data 0x1fa191d/0x2056000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 28614656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357201 data_alloc: 234881024 data_used: 20664320
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122609664 unmapped: 28614656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9616000/0x0/0x4ffc00000, data 0x1fa191d/0x2056000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 28557312 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122667008 unmapped: 28557312 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9616000/0x0/0x4ffc00000, data 0x1fa191d/0x2056000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 28508160 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122716160 unmapped: 28508160 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1357653 data_alloc: 234881024 data_used: 20668416
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.206544876s of 10.224551201s, submitted: 20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126771200 unmapped: 24453120 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127172608 unmapped: 24051712 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 24043520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 24043520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f895a000/0x0/0x4ffc00000, data 0x2c5791d/0x2d0c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127180800 unmapped: 24043520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471027 data_alloc: 234881024 data_used: 21590016
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127229952 unmapped: 23994368 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127229952 unmapped: 23994368 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f895d000/0x0/0x4ffc00000, data 0x2c5a91d/0x2d0f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 23986176 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127238144 unmapped: 23986176 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 23977984 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470131 data_alloc: 234881024 data_used: 21594112
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127246336 unmapped: 23977984 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.881290436s of 10.984512329s, submitted: 180
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8934000/0x0/0x4ffc00000, data 0x2c8391d/0x2d38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1472299 data_alloc: 234881024 data_used: 21581824
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8934000/0x0/0x4ffc00000, data 0x2c8391d/0x2d38000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127254528 unmapped: 23969792 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8931000/0x0/0x4ffc00000, data 0x2c8691d/0x2d3b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470231 data_alloc: 234881024 data_used: 21581824
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f892e000/0x0/0x4ffc00000, data 0x2c8991d/0x2d3e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.997168541s of 11.008128166s, submitted: 21
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8929000/0x0/0x4ffc00000, data 0x2c8e91d/0x2d43000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471191 data_alloc: 234881024 data_used: 21581824
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8926000/0x0/0x4ffc00000, data 0x2c9191d/0x2d46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8926000/0x0/0x4ffc00000, data 0x2c9191d/0x2d46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470743 data_alloc: 234881024 data_used: 21581824
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8923000/0x0/0x4ffc00000, data 0x2c9491d/0x2d49000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471335 data_alloc: 234881024 data_used: 21581824
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.687617302s of 15.696373940s, submitted: 9
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f891e000/0x0/0x4ffc00000, data 0x2c9991d/0x2d4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471207 data_alloc: 234881024 data_used: 21581824
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471291 data_alloc: 234881024 data_used: 21581824
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f891e000/0x0/0x4ffc00000, data 0x2c9991d/0x2d4e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127262720 unmapped: 23961600 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 23945216 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 23945216 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127279104 unmapped: 23945216 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473307 data_alloc: 234881024 data_used: 21569536
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f891b000/0x0/0x4ffc00000, data 0x2c9c91d/0x2d51000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.476576805s of 12.488263130s, submitted: 20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471383 data_alloc: 234881024 data_used: 21569536
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8916000/0x0/0x4ffc00000, data 0x2ca191d/0x2d56000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8913000/0x0/0x4ffc00000, data 0x2ca491d/0x2d59000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471807 data_alloc: 234881024 data_used: 21569536
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127320064 unmapped: 23904256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8913000/0x0/0x4ffc00000, data 0x2ca491d/0x2d59000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.613974571s of 11.620789528s, submitted: 6
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f91630d20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc400 session 0x561f943e2000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e800 session 0x561f93ba45a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1276303 data_alloc: 234881024 data_used: 15196160
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9863000/0x0/0x4ffc00000, data 0x1920898/0x19d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9863000/0x0/0x4ffc00000, data 0x1920898/0x19d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 124035072 unmapped: 27189248 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f948a1860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f9346d0e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e800 session 0x561f94067860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9889000/0x0/0x4ffc00000, data 0x1920898/0x19d3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd800 session 0x561f936c9680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e400 session 0x561f937670e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1068175 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116768768 unmapped: 34455552 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f93ba50e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd400 session 0x561f91631a40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd800 session 0x561f9346c000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f943dcb40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.735616684s of 25.796653748s, submitted: 96
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f91fb7e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099441 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa93f000/0x0/0x4ffc00000, data 0x86a898/0x91d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f916d0b40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118554624 unmapped: 32669696 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1099441 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f916d1680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f916d1c20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa93f000/0x0/0x4ffc00000, data 0x86a898/0x91d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f935a72c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118505472 unmapped: 32718848 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa93e000/0x0/0x4ffc00000, data 0x86a8a8/0x91e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120892 data_alloc: 218103808 data_used: 7512064
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd800 session 0x561f94878f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.444226265s of 12.461299896s, submitted: 19
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc400 session 0x561f94838000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118513664 unmapped: 32710656 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f936c9e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074433 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 nova_compute[228704]: 2025-11-25 10:11:14.344 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 33955840 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f9346d2c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f9485a5a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f936c8960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 33120256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f91630b40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f948794a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f93ba5e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 33120256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137758 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 33120256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118104064 unmapped: 33120256 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f935a6780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc400 session 0x561f935a74a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 33103872 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118120448 unmapped: 33103872 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f935a7860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.693018913s of 13.733590126s, submitted: 58
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f91f5d0e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 34054144 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137575 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 34037760 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 34439168 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa5da000/0x0/0x4ffc00000, data 0xbcf898/0xc82000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 34439168 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 34439168 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f94067680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f9208da40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cd000 session 0x561f948a0960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 35528704 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083927 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 35528704 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115695616 unmapped: 35528704 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083943 data_alloc: 218103808 data_used: 4718592
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083943 data_alloc: 218103808 data_used: 4718592
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083943 data_alloc: 218103808 data_used: 4718592
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.829650879s of 21.862865448s, submitted: 51
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac32000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 115720192 unmapped: 35504128 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f94067e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f93481680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35028992 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193586 data_alloc: 218103808 data_used: 4718592
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116195328 unmapped: 35028992 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d71000/0x0/0x4ffc00000, data 0x14378fa/0x14eb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 34947072 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116277248 unmapped: 34947072 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f948a0b40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 34701312 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 116391936 unmapped: 34832384 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195862 data_alloc: 218103808 data_used: 4816896
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d4d000/0x0/0x4ffc00000, data 0x145b8fa/0x150f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1290406 data_alloc: 234881024 data_used: 18866176
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9d4d000/0x0/0x4ffc00000, data 0x145b8fa/0x150f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121569280 unmapped: 29655040 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.984073639s of 18.020271301s, submitted: 42
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 20914176 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387620 data_alloc: 234881024 data_used: 19660800
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9222000/0x0/0x4ffc00000, data 0x1f868fa/0x203a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130203648 unmapped: 21020672 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 21004288 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 21004288 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 21004288 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 21004288 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400426 data_alloc: 234881024 data_used: 19869696
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9194000/0x0/0x4ffc00000, data 0x20138fa/0x20c7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9174000/0x0/0x4ffc00000, data 0x20348fa/0x20e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 20971520 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399554 data_alloc: 234881024 data_used: 19906560
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.606201172s of 11.678620338s, submitted: 110
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130359296 unmapped: 20865024 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f934732c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e400 session 0x561f93472d20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f93ba4b40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100573 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100573 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100573 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 32768000 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 32759808 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread fragmentation_score=0.000501 took=0.000042s
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 32759808 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100573 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4faa6c000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 32759808 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 32759808 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f948a12c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f91f5c000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f948a1a40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530e800 session 0x561f91624f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.195787430s of 22.213541031s, submitted: 29
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 32612352 heap: 151224320 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f9208c1e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f91631860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f943dda40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f936c90e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f000 session 0x561f9346c5a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x11698a8/0x121d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x11698a8/0x121d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1190084 data_alloc: 218103808 data_used: 4726784
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x11698a8/0x121d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f94878b40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118759424 unmapped: 36134912 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f93ba5860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa03f000/0x0/0x4ffc00000, data 0x11698a8/0x121d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f935a6f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f920c2d20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118906880 unmapped: 35987456 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194598 data_alloc: 218103808 data_used: 4726784
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 35979264 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 33488896 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121405440 unmapped: 33488896 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x118d8b8/0x1242000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278634 data_alloc: 234881024 data_used: 17166336
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 121413632 unmapped: 33480704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91647400 session 0x561f91fb7e00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f91fb74a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f91fb72c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f91fb7680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.770525932s of 15.796881676s, submitted: 26
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f91fb6000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08800 session 0x561f948792c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f94878780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f9347af00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f91f5d680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122028032 unmapped: 32866304 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa01a000/0x0/0x4ffc00000, data 0x118d8b8/0x1242000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 122060800 unmapped: 32833536 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339484 data_alloc: 234881024 data_used: 17166336
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 128753664 unmapped: 26140672 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f940663c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09c00 session 0x561f91fb70e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f91fb7a40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127229952 unmapped: 27664384 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f937661e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 127574016 unmapped: 27320320 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92ec000/0x0/0x4ffc00000, data 0x1eb795c/0x1f70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451231 data_alloc: 234881024 data_used: 24518656
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92ec000/0x0/0x4ffc00000, data 0x1eb795c/0x1f70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133046272 unmapped: 21848064 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451247 data_alloc: 234881024 data_used: 24518656
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f92ec000/0x0/0x4ffc00000, data 0x1eb795c/0x1f70000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 21815296 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 133079040 unmapped: 21815296 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.282657623s of 13.364300728s, submitted: 110
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1537375 data_alloc: 234881024 data_used: 25477120
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f880d000/0x0/0x4ffc00000, data 0x299695c/0x2a4f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138657792 unmapped: 16236544 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87ee000/0x0/0x4ffc00000, data 0x29b595c/0x2a6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87ee000/0x0/0x4ffc00000, data 0x29b595c/0x2a6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1536631 data_alloc: 234881024 data_used: 25481216
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87ee000/0x0/0x4ffc00000, data 0x29b595c/0x2a6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138780672 unmapped: 16113664 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.975310326s of 12.046205521s, submitted: 111
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138788864 unmapped: 16105472 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1536935 data_alloc: 234881024 data_used: 25481216
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87e1000/0x0/0x4ffc00000, data 0x29c295c/0x2a7b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138821632 unmapped: 16072704 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f87e1000/0x0/0x4ffc00000, data 0x29c295c/0x2a7b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138862592 unmapped: 16031744 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138862592 unmapped: 16031744 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f948a12c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f9346cd20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 138870784 unmapped: 16023552 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f08400 session 0x561f948381e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 18325504 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349479 data_alloc: 234881024 data_used: 17551360
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 18325504 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 18325504 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f99e6000/0x0/0x4ffc00000, data 0x17bf8b8/0x1874000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f9485a3c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f936c8960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f916d0000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126994 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126994 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126994 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fabca000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 126967808 unmapped: 27926528 heap: 154894336 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1126994 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f93767a40
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93ee0c00 session 0x561f937672c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f93766000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f93767860
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.702108383s of 26.754915237s, submitted: 81
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f93766780
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f934801e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f800 session 0x561f91625680
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f931d5c20
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f916d03c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fac31000/0x0/0x4ffc00000, data 0x5778c1/0x62b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f948a05a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f948a10e0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210092 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f93f09800 session 0x561f948a0000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f91727800 session 0x561f925743c0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129720320 unmapped: 33570816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xfc590a/0x107a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129728512 unmapped: 33562624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xfc590a/0x107a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283506 data_alloc: 234881024 data_used: 15085568
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xfc590a/0x107a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fa1e2000/0x0/0x4ffc00000, data 0xfc590a/0x107a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283506 data_alloc: 234881024 data_used: 15085568
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 129859584 unmapped: 33431552 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.073707581s of 16.113704681s, submitted: 50
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 27885568 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 27926528 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 27926528 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8766000/0x0/0x4ffc00000, data 0x18a190a/0x1956000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 27926528 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358618 data_alloc: 234881024 data_used: 15777792
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135364608 unmapped: 27926528 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135446528 unmapped: 27844608 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135446528 unmapped: 27844608 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8766000/0x0/0x4ffc00000, data 0x18a190a/0x1956000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135446528 unmapped: 27844608 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356882 data_alloc: 234881024 data_used: 15781888
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f8747000/0x0/0x4ffc00000, data 0x18c090a/0x1975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.588966370s of 11.650872231s, submitted: 111
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 27836416 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f924cc000 session 0x561f92574f00
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530ec00 session 0x561f934734a0
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 ms_handle_reset con 0x561f9530f400 session 0x561f948a0960
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130580480 unmapped: 32710656 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'config diff' '{prefix=config diff}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'config show' '{prefix=config show}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130310144 unmapped: 32980992 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 33103872 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'log dump' '{prefix=log dump}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 33038336 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'perf dump' '{prefix=perf dump}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'perf schema' '{prefix=perf schema}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130007040 unmapped: 33284096 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130007040 unmapped: 33284096 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130007040 unmapped: 33284096 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130007040 unmapped: 33284096 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130023424 unmapped: 33267712 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130023424 unmapped: 33267712 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130023424 unmapped: 33267712 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130023424 unmapped: 33267712 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130023424 unmapped: 33267712 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130023424 unmapped: 33267712 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130023424 unmapped: 33267712 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130031616 unmapped: 33259520 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 33251328 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 11K writes, 3477 syncs, 3.28 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4105 writes, 14K keys, 4105 commit groups, 1.0 writes per commit group, ingest: 17.00 MB, 0.03 MB/s
Interval WAL: 4105 writes, 1816 syncs, 2.26 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 33251328 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 33251328 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 33251328 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 33251328 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 33251328 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 33251328 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130039808 unmapped: 33251328 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 33243136 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130048000 unmapped: 33243136 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139789 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a91000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130056192 unmapped: 33234944 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 99.739044189s of 99.761985779s, submitted: 41
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130105344 unmapped: 33185792 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130121728 unmapped: 33169408 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 32841728 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 32833536 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 32833536 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 32833536 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 32833536 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 32833536 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 32833536 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 32833536 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130457600 unmapped: 32833536 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 32825344 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 32825344 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 32825344 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 32825344 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 32825344 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 32825344 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 32825344 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 32825344 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 32817152 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 32817152 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 32817152 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 32817152 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 32817152 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 32817152 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 32817152 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130473984 unmapped: 32817152 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130482176 unmapped: 32808960 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 32800768 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 32792576 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 32792576 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130498560 unmapped: 32792576 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: mgrc ms_handle_reset ms_handle_reset con 0x561f91974000
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/92811439
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/92811439,v1:192.168.122.100:6801/92811439]
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: mgrc handle_mgr_configure stats_period=5
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 32645120 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 32645120 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 32645120 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 32743424 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 32743424 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 32743424 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130547712 unmapped: 32743424 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130555904 unmapped: 32735232 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130564096 unmapped: 32727040 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130572288 unmapped: 32718848 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130588672 unmapped: 32702464 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 32694272 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130605056 unmapped: 32686080 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 32677888 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130621440 unmapped: 32669696 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 32653312 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 32645120 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 32645120 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 32645120 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 32645120 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130646016 unmapped: 32645120 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130654208 unmapped: 32636928 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130662400 unmapped: 32628736 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130670592 unmapped: 32620544 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 32612352 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130686976 unmapped: 32604160 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 32595968 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 32595968 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 32595968 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 32595968 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 32595968 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 32595968 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 32595968 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130695168 unmapped: 32595968 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 32587776 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 32587776 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 32587776 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 32587776 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 32587776 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 32587776 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 32587776 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130703360 unmapped: 32587776 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130719744 unmapped: 32571392 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130736128 unmapped: 32555008 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130744320 unmapped: 32546816 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130752512 unmapped: 32538624 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 32530432 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 32522240 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 32514048 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 32505856 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 32497664 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 32497664 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 32497664 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 32497664 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 32497664 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 32497664 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139497 data_alloc: 218103808 data_used: 4722688
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: osd.2 144 heartbeat osd_stat(store_statfs(0x4f9a92000/0x0/0x4ffc00000, data 0x577898/0x62a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'config diff' '{prefix=config diff}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'config show' '{prefix=config show}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'counter dump' '{prefix=counter dump}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'counter schema' '{prefix=counter schema}'
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: prioritycache tune_memory target: 4294967296 mapped: 130629632 unmapped: 32661504 heap: 163291136 old mem: 2845415832 new mem: 2845415832
Nov 25 05:11:14 np0005534696 ceph-osd[77914]: do_command 'log dump' '{prefix=log dump}'
Nov 25 05:11:14 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 25 05:11:14 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3730769747' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 25 05:11:14 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Nov 25 05:11:14 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3024921306' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 25 05:11:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Nov 25 05:11:15 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2441670351' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 25 05:11:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:15 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:15 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:15.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:15 np0005534696 nova_compute[228704]: 2025-11-25 10:11:15.264 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Nov 25 05:11:15 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3914276966' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 25 05:11:15 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:15 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:15 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:15.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:15 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:11:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:15 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:11:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:16 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:11:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:16 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:11:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:16 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:11:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:16 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:16 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Nov 25 05:11:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/94335548' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 25 05:11:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Nov 25 05:11:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2661222347' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 25 05:11:16 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Nov 25 05:11:16 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/869887195' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 25 05:11:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:17 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:17 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:17.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2615350652' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2257282358' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 25 05:11:17 np0005534696 podman[252746]: 2025-11-25 10:11:17.392077413 +0000 UTC m=+0.096737830 container health_status f6c5409dc99bf94248355b092deb47c4b8eab5c215ed1c6e386ba9e2fe6907b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a9583cb3baf440d2358ef041373833afbeae60da8159dd031502379901141620', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1334765920' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3855566086' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Nov 25 05:11:17 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3776552969' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 25 05:11:17 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:17 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:17 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:17.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:18 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:18 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Nov 25 05:11:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3984037843' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 25 05:11:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Nov 25 05:11:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/898889878' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 25 05:11:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Nov 25 05:11:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3364237' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 25 05:11:18 np0005534696 systemd[1]: Starting Hostname Service...
Nov 25 05:11:18 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Nov 25 05:11:18 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/891235372' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 25 05:11:18 np0005534696 systemd[1]: Started Hostname Service.
Nov 25 05:11:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:19 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:19 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:19.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:19 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 25 05:11:19 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2073664472' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 05:11:19 np0005534696 nova_compute[228704]: 2025-11-25 10:11:19.347 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:19 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:19 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:19 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:19.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:19 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Nov 25 05:11:19 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504291124' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 25 05:11:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:20 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:20 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1644087196' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 25 05:11:20 np0005534696 nova_compute[228704]: 2025-11-25 10:11:20.263 228708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3552601814' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2191149956' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 05:11:20 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 25 05:11:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 25 05:11:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 25 05:11:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:20 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 25 05:11:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-nfs-cephfs-1-0-compute-2-jouchy[235744]: 25/11/2025 10:11:21 : epoch 69257df1 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 25 05:11:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:21 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:21 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.001000011s ======
Nov 25 05:11:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Nov 25 05:11:21 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Nov 25 05:11:21 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3774492437' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 25 05:11:21 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 05:11:21 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 05:11:21 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:21 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:21 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.102 - anonymous [25/Nov/2025:10:11:21.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 25 05:11:21 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Nov 25 05:11:21 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4227443116' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 25 05:11:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:22 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:22 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:22 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 25 05:11:22 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 25 05:11:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Nov 25 05:11:23 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2843991185' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 25 05:11:23 np0005534696 ceph-mon[75508]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Nov 25 05:11:23 np0005534696 ceph-mon[75508]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/300601334' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 25 05:11:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-nfs-cephfs-compute-2-opynes[88167]: Tue Nov 25 10:11:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:23 np0005534696 ceph-af1c9ae3-08d7-5547-a53d-2cccf7c6ef90-keepalived-rgw-default-compute-2-aswfow[85432]: Tue Nov 25 10:11:23 2025: (VI_0) received an invalid passwd!
Nov 25 05:11:23 np0005534696 radosgw[81011]: ====== starting new request req=0x7fe3499d55d0 =====
Nov 25 05:11:23 np0005534696 radosgw[81011]: ====== req done req=0x7fe3499d55d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 25 05:11:23 np0005534696 radosgw[81011]: beast: 0x7fe3499d55d0: 192.168.122.100 - anonymous [25/Nov/2025:10:11:23.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
